Feb 28 09:00:35 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 28 09:00:35 crc restorecon[4756]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 09:00:35 crc restorecon[4756]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc 
restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc 
restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 
09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc 
restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc 
restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35
crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 
09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc 
restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc 
restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc 
restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:35 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 09:00:36 crc restorecon[4756]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc 
restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 09:00:36 crc restorecon[4756]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 28 09:00:36 crc kubenswrapper[4996]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 09:00:36 crc kubenswrapper[4996]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 28 09:00:36 crc kubenswrapper[4996]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 09:00:36 crc kubenswrapper[4996]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 28 09:00:36 crc kubenswrapper[4996]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 28 09:00:36 crc kubenswrapper[4996]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.761039 4996 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769338 4996 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769391 4996 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769404 4996 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769413 4996 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769423 4996 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769432 4996 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769441 4996 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769449 4996 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769465 4996 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769474 4996 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769483 4996 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769491 4996 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769500 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769508 4996 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769516 4996 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769525 4996 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769534 4996 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769543 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769550 4996 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769558 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769566 4996 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769575 4996 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769584 4996 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769591 4996 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769599 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769607 4996 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769615 4996 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769622 4996 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769631 4996 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769638 4996 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769646 4996 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769653 4996 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769661 4996 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769675 4996 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769685 4996 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769696 4996 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769704 4996 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769713 4996 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769722 4996 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769730 4996 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769737 4996 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769745 4996 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769754 4996 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769761 4996 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769769 4996 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769776 4996 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769784 4996 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769791 4996 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769799 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769807 4996 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769815 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769823 4996 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769830 4996 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769838 4996 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769845 4996 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769853 4996 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769860 4996 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769872 4996 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
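Most of this startup chatter is `feature_gate.go:330` warnings for gate names that the kubelet's embedded feature-gate registry does not recognize, and many names repeat across passes. A minimal sketch for tallying those warnings from a captured journal excerpt; the `sample` string below is a shortened stand-in for the log text, not the full journal:

```python
import re
from collections import Counter

def count_unrecognized_gates(log_text: str) -> Counter:
    """Tally 'unrecognized feature gate' warnings by gate name."""
    gates = re.findall(r"unrecognized feature gate: (\w+)", log_text)
    return Counter(gates)

# Shortened stand-in for the journal text above.
sample = (
    "W0228 09:00:36.769404 4996 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota "
    "W0228 09:00:36.769413 4996 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig "
    "W0228 09:00:36.772989 4996 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota"
)
print(count_unrecognized_gates(sample)["EtcdBackendQuota"])  # 2
```

Feeding the whole journal through this makes it obvious that the warnings are a fixed set of gate names re-logged on each configuration pass, not an ever-growing error condition.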
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769883 4996 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769892 4996 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769902 4996 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769911 4996 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769919 4996 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769929 4996 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769937 4996 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769945 4996 feature_gate.go:330] unrecognized feature gate: Example
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769964 4996 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769971 4996 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769980 4996 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769987 4996 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.769995 4996 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770660 4996 flags.go:64] FLAG: --address="0.0.0.0"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770689 4996 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770705 4996 flags.go:64] FLAG: --anonymous-auth="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770716 4996 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770728 4996 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770737 4996 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770749 4996 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770760 4996 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770770 4996 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770779 4996 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770789 4996 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770799 4996 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770808 4996 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770819 4996 flags.go:64] FLAG: --cgroup-root=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770828 4996 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770837 4996 flags.go:64] FLAG: --client-ca-file=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770846 4996 flags.go:64] FLAG: --cloud-config=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770855 4996 flags.go:64] FLAG: --cloud-provider=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770864 4996 flags.go:64] FLAG: --cluster-dns="[]"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770876 4996 flags.go:64] FLAG: --cluster-domain=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770885 4996 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770894 4996 flags.go:64] FLAG: --config-dir=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770903 4996 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770912 4996 flags.go:64] FLAG: --container-log-max-files="5"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770923 4996 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770932 4996 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770941 4996 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770951 4996 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770960 4996 flags.go:64] FLAG: --contention-profiling="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770969 4996 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770978 4996 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770988 4996 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.770996 4996 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771037 4996 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771047 4996 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771056 4996 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771065 4996 flags.go:64] FLAG: --enable-load-reader="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771073 4996 flags.go:64] FLAG: --enable-server="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771082 4996 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771093 4996 flags.go:64] FLAG: --event-burst="100"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771103 4996 flags.go:64] FLAG: --event-qps="50"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771113 4996 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771122 4996 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771132 4996 flags.go:64] FLAG: --eviction-hard=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771143 4996 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771152 4996 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771161 4996 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771172 4996 flags.go:64] FLAG: --eviction-soft=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771181 4996 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771190 4996 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771199 4996 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771208 4996 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771217 4996 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771225 4996 flags.go:64] FLAG: --fail-swap-on="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771234 4996 flags.go:64] FLAG: --feature-gates=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771246 4996 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771255 4996 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771264 4996 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771273 4996 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771282 4996 flags.go:64] FLAG: --healthz-port="10248"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771292 4996 flags.go:64] FLAG: --help="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771303 4996 flags.go:64] FLAG: --hostname-override=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771311 4996 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771321 4996 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771330 4996 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771339 4996 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771348 4996 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771357 4996 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771366 4996 flags.go:64] FLAG: --image-service-endpoint=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771374 4996 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771384 4996 flags.go:64] FLAG: --kube-api-burst="100"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771392 4996 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771402 4996 flags.go:64] FLAG: --kube-api-qps="50"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771411 4996 flags.go:64] FLAG: --kube-reserved=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771420 4996 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771430 4996 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771440 4996 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771448 4996 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771457 4996 flags.go:64] FLAG: --lock-file=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771465 4996 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771474 4996 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771484 4996 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771509 4996 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771519 4996 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771528 4996 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771537 4996 flags.go:64] FLAG: --logging-format="text"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771546 4996 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771556 4996 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771565 4996 flags.go:64] FLAG: --manifest-url=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771573 4996 flags.go:64] FLAG: --manifest-url-header=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771584 4996 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771594 4996 flags.go:64] FLAG: --max-open-files="1000000"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771605 4996 flags.go:64] FLAG: --max-pods="110"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771614 4996 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771623 4996 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771632 4996 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771641 4996 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771650 4996 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771659 4996 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771668 4996 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771688 4996 flags.go:64] FLAG: --node-status-max-images="50"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771697 4996 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771707 4996 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771715 4996 flags.go:64] FLAG: --pod-cidr=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771725 4996 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771737 4996 flags.go:64] FLAG: --pod-manifest-path=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771746 4996 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771755 4996 flags.go:64] FLAG: --pods-per-core="0"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771764 4996 flags.go:64] FLAG: --port="10250"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771774 4996 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771782 4996 flags.go:64] FLAG: --provider-id=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771791 4996 flags.go:64] FLAG: --qos-reserved=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771800 4996 flags.go:64] FLAG: --read-only-port="10255"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771809 4996 flags.go:64] FLAG: --register-node="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771818 4996 flags.go:64] FLAG: --register-schedulable="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771827 4996 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771841 4996 flags.go:64] FLAG: --registry-burst="10"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771851 4996 flags.go:64] FLAG: --registry-qps="5"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771859 4996 flags.go:64] FLAG: --reserved-cpus=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771869 4996 flags.go:64] FLAG: --reserved-memory=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771880 4996 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771889 4996 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771898 4996 flags.go:64] FLAG: --rotate-certificates="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771907 4996 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771916 4996 flags.go:64] FLAG: --runonce="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771925 4996 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771934 4996 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771943 4996 flags.go:64] FLAG: --seccomp-default="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771952 4996 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771961 4996 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771970 4996 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771979 4996 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771989 4996 flags.go:64] FLAG: --storage-driver-password="root"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.771997 4996 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772031 4996 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772040 4996 flags.go:64] FLAG: --storage-driver-user="root"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772049 4996 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772058 4996 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772068 4996 flags.go:64] FLAG: --system-cgroups=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772076 4996 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772090 4996 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772099 4996 flags.go:64] FLAG: --tls-cert-file=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772108 4996 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772120 4996 flags.go:64] FLAG: --tls-min-version=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772128 4996 flags.go:64] FLAG: --tls-private-key-file=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772137 4996 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772146 4996 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772154 4996 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772164 4996 flags.go:64] FLAG: --v="2"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772180 4996 flags.go:64] FLAG: --version="false"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772192 4996 flags.go:64] FLAG: --vmodule=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772202 4996 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.772212 4996 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772414 4996 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772426 4996 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772438 4996 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772447 4996 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772457 4996 feature_gate.go:330] unrecognized feature gate: Example
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772467 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772476 4996 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772484 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772492 4996 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772500 4996 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772507 4996 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772516 4996 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772523 4996 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772534 4996 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772543 4996 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772555 4996 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772563 4996 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772572 4996 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772581 4996 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772594 4996 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772601 4996 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772609 4996 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772617 4996 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772625 4996 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772633 4996 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772640 4996 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772649 4996 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772657 4996 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772664 4996 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772672 4996 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772679 4996 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772687 4996 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772695 4996 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772705 4996 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772713 4996 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772720 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772731 4996 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
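The `flags.go:64` lines in this section dump every kubelet command-line flag in the form `FLAG: --name="value"`. A small sketch for recovering those pairs into a dictionary; the regex is an assumption based only on the line shape seen above, and `sample` is a two-entry stand-in for the real journal text:

```python
import re

def parse_kubelet_flags(log_text: str) -> dict:
    """Extract --flag="value" pairs from kubelet flags.go:64 log lines."""
    return dict(re.findall(r'FLAG: --([\w-]+)="([^"]*)"', log_text))

# Two stand-in entries copied from the dump above.
sample = (
    'I0228 09:00:36.770705 4996 flags.go:64] FLAG: --anonymous-auth="true" '
    'I0228 09:00:36.770808 4996 flags.go:64] FLAG: --cgroup-driver="cgroupfs"'
)
flags = parse_kubelet_flags(sample)
print(flags["cgroup-driver"])  # cgroupfs
```

This turns the wall of flag lines into something you can diff against `/etc/kubernetes/kubelet.conf` when chasing a configuration discrepancy.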
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772740 4996 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772749 4996 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772757 4996 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772765 4996 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772773 4996 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772782 4996 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772790 4996 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772798 4996 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772806 4996 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772816 4996 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772826 4996 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772836 4996 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772844 4996 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772852 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772862 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772871 4996 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772879 4996 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772886 4996 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772894 4996 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772901 4996 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772909 4996 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772917 4996 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772925 4996 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772932 4996 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772940 4996 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772947 4996 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772955 4996 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772963 4996 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772973 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772980 4996 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772989 4996 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.772997 4996 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.773028 4996 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.773036 4996 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.773757 4996 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.785769 4996 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.785838 4996 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.785987 4996 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786038 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786047 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786063 4996 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786075 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786085 4996 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786095 4996 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786106 4996 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786117 4996 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786125 4996 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786133 4996 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786141 4996 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786149 4996 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786157 4996 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786164 4996 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786172 4996 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786180 4996 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786188 4996 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786195 4996 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786204 4996 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786211 4996 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786219 4996 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786226 4996 
feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786235 4996 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786244 4996 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786252 4996 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786260 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786269 4996 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786277 4996 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786285 4996 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786292 4996 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786300 4996 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786308 4996 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786316 4996 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786326 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786334 4996 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786376 4996 feature_gate.go:330] unrecognized feature gate: 
VolumeGroupSnapshot Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786388 4996 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786397 4996 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786406 4996 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786414 4996 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786421 4996 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786430 4996 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786438 4996 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786474 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786482 4996 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786490 4996 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786498 4996 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786506 4996 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786514 4996 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786524 4996 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. 
It will be removed in a future release. Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786534 4996 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786543 4996 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786553 4996 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786562 4996 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786573 4996 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786581 4996 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786588 4996 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786596 4996 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786604 4996 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786612 4996 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786620 4996 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786628 4996 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786635 4996 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786643 4996 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 09:00:36 crc kubenswrapper[4996]: 
W0228 09:00:36.786651 4996 feature_gate.go:330] unrecognized feature gate: Example Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786659 4996 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786666 4996 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786674 4996 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786682 4996 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786690 4996 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.786705 4996 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786935 4996 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786947 4996 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786956 4996 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786966 4996 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786976 4996 feature_gate.go:353] Setting GA feature gate 
DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786986 4996 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.786995 4996 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787050 4996 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787060 4996 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787070 4996 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787078 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787087 4996 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787095 4996 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787105 4996 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787115 4996 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787123 4996 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787135 4996 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787145 4996 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787154 4996 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787164 4996 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787174 4996 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787183 4996 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787192 4996 feature_gate.go:330] unrecognized feature gate: Example Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787201 4996 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787211 4996 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787222 4996 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787230 4996 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787240 4996 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787249 4996 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787257 4996 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787265 4996 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787274 4996 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787282 4996 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787290 4996 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787299 4996 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787307 4996 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787315 4996 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787323 4996 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787330 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787338 4996 
feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787346 4996 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787353 4996 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787361 4996 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787369 4996 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787376 4996 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787384 4996 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787392 4996 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787400 4996 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787407 4996 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787415 4996 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787423 4996 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787432 4996 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787440 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787447 4996 feature_gate.go:330] unrecognized feature gate: 
AlibabaPlatform Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787455 4996 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787463 4996 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787471 4996 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787479 4996 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787486 4996 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787494 4996 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787501 4996 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787509 4996 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787517 4996 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787524 4996 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787532 4996 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787540 4996 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787548 4996 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787555 4996 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787563 4996 feature_gate.go:330] unrecognized 
feature gate: MachineConfigNodes Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787570 4996 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.787579 4996 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.787592 4996 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.787850 4996 server.go:940] "Client rotation is on, will bootstrap in background" Feb 28 09:00:36 crc kubenswrapper[4996]: E0228 09:00:36.792386 4996 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.796947 4996 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.797152 4996 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.801162 4996 server.go:997] "Starting client certificate rotation" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.801237 4996 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.802379 4996 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.830744 4996 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 28 09:00:36 crc kubenswrapper[4996]: E0228 09:00:36.833596 4996 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.836798 4996 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.858266 4996 log.go:25] "Validated CRI v1 runtime API" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.902156 4996 log.go:25] "Validated CRI v1 image API" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.904266 4996 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.910393 4996 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-28-08-55-36-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.910499 4996 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.931054 4996 manager.go:217] Machine: {Timestamp:2026-02-28 09:00:36.929060045 +0000 UTC m=+0.619862887 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:51ba2d55-b443-4293-b707-d84c05817b7c BootID:939d05c1-c101-41a6-8708-5f2e09c96113 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:7e:18:c8 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:7e:18:c8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fd:63:2d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:60:bd:54 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f5:3d:d0 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:39:28:fb Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:70:0e:01 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e6:23:48:7a:99:fb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:98:1d:5f:88:06 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 
Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.931350 4996 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.931512 4996 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.931930 4996 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.932245 4996 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.932288 4996 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.933264 4996 topology_manager.go:138] "Creating topology manager with none policy"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.933292 4996 container_manager_linux.go:303] "Creating device plugin manager"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.934054 4996 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.934088 4996 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.934302 4996 state_mem.go:36] "Initialized new in-memory state store"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.934464 4996 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.937553 4996 kubelet.go:418] "Attempting to sync node with API server"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.937582 4996 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.937627 4996 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.937642 4996 kubelet.go:324] "Adding apiserver pod source"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.937654 4996 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.942837 4996 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.944001 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.944065 4996 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 28 09:00:36 crc kubenswrapper[4996]: E0228 09:00:36.944121 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.944128 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Feb 28 09:00:36 crc kubenswrapper[4996]: E0228 09:00:36.944236 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.946916 4996 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.948868 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.948953 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.948984 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.949064 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.949131 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.949169 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.949186 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.949214 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.949235 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.949255 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.949279 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.949297 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.951056 4996 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.951889 4996 server.go:1280] "Started kubelet"
Feb 28 09:00:36 crc systemd[1]: Started Kubernetes Kubelet.
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.954594 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.954718 4996 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.954711 4996 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.965269 4996 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.966587 4996 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.966664 4996 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 28 09:00:36 crc kubenswrapper[4996]: E0228 09:00:36.966826 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.966838 4996 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.966872 4996 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.968985 4996 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 28 09:00:36 crc kubenswrapper[4996]: W0228 09:00:36.969733 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.969858 4996 factory.go:55] Registering systemd factory
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.969892 4996 factory.go:221] Registration of the systemd container factory successfully
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.969822 4996 server.go:460] "Adding debug handlers to kubelet server"
Feb 28 09:00:36 crc kubenswrapper[4996]: E0228 09:00:36.969954 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.970342 4996 factory.go:153] Registering CRI-O factory
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.970382 4996 factory.go:221] Registration of the crio container factory successfully
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.970596 4996 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.970660 4996 factory.go:103] Registering Raw factory
Feb 28 09:00:36 crc kubenswrapper[4996]: E0228 09:00:36.969314 4996 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18985d7b2add9382 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:36.951815042 +0000 UTC m=+0.642617893,LastTimestamp:2026-02-28 09:00:36.951815042 +0000 UTC m=+0.642617893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.970722 4996 manager.go:1196] Started watching for new ooms in manager
Feb 28 09:00:36 crc kubenswrapper[4996]: E0228 09:00:36.971347 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.972031 4996 manager.go:319] Starting recovery of all containers
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979227 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979289 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979303 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979317 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979334 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979353 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979371 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979388 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979409 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979427 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979488 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979506 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979523 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979543 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979560 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979604 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979617 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979629 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979672 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979707 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979747 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979764 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979782 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979798 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979811 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979826 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979888 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979901 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979937 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979949 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979961 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979975 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979987 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.979998 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.980073 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.980087 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.983617 4996 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.983832 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.984171 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.984361 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.984516 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.984649 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.984817 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.984953 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.985154 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.985334 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.985461 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.985583 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.985722 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.985846 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.986144 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.986320 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.986444 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.986734 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.986884 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.987052 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.987276 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.987505 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.987710 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.987867 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.987994 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.988170 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.988295 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.988430 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.988599 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.988756 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.988984 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.989203 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.989327 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.989448 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.989589 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.989714 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.990093 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.990319 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.990620 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991268 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991424 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991460 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991490 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991524 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991552 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991581 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991674 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991728 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991754 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991779 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991806 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991836 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991861 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991885 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.991939 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992000 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992072 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992106 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992131 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992157 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" 
seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992181 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992207 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992269 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992298 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992329 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992358 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992384 4996 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992410 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992436 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992483 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992517 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992549 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992586 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992618 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992649 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992676 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992704 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992734 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992760 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992821 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992847 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992874 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992915 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992942 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992969 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.992995 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993059 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993097 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993123 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993162 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993186 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993255 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993281 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993316 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993342 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993367 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993404 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993429 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993454 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993478 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993506 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993533 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993575 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993615 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993643 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993679 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993707 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993744 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993771 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993807 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993835 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993868 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993897 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993924 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993948 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993973 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.993999 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994060 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994090 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994116 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994142 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" 
seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994183 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994221 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994250 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994277 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994304 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994329 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994354 4996 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994378 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994404 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994429 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994488 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994518 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994543 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994571 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994598 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.994625 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.998517 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.998679 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.998707 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.998742 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.998829 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.998847 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.998923 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.998985 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.999033 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.999049 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.999070 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.999091 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 28 09:00:36 crc kubenswrapper[4996]: I0228 09:00:36.999108 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999129 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999146 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999245 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999271 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999322 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999401 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999418 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999460 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999479 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999495 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999544 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999560 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999580 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999605 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999627 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999652 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999669 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999684 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999735 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999852 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999901 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999916 4996 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999930 4996 reconstruct.go:97] "Volume reconstruction finished"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:36.999998 4996 reconciler.go:26] "Reconciler: start to sync state"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.000237 4996 manager.go:324] Recovery completed
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.013255 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.017499 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.017574 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.017596 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.020867 4996 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.020892 4996 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.020997 4996 state_mem.go:36] "Initialized new in-memory state store"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.024812 4996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.028455 4996 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.031502 4996 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.031717 4996 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.031907 4996 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 28 09:00:37 crc kubenswrapper[4996]: W0228 09:00:37.032698 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused
Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.032777 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.040387 4996 policy_none.go:49] "None policy: Start"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.042598 4996 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.042681 4996 state_mem.go:35] "Initializing new in-memory state store"
Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.067218 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.121592 4996 manager.go:334] "Starting Device Plugin manager"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.122164 4996 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.122212 4996 server.go:79] "Starting device plugin registration server"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.122988 4996 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.123081 4996 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.123298 4996 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.123440 4996 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.123454 4996 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.132407 4996 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.132648 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.133636 4996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.134476 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.134578 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.134597 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.134866 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.136218 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.136303 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.136376 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.136447 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.136464 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.137002 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.137083 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.137436 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.137723 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.137751 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.137779 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.139227 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.139285 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.139307 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.141816 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.142161 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.142232 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.142593 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.142714 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.142818 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.144570 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.144607 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.144626 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.144948 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.145161 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.145235 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.147047 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.147079 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.147090 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.148642 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.148681 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.148698 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.148705 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.148768 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.148781 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.148981 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.149031 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.150254 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.150394 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.150471 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.172279 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204105 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204169 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204206 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204238 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204268 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204379 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204496 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204593 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204703 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204765 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204809 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204845 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204896 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204944 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.204991 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.258773 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.260035 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.260072 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.260083 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.260108 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.260592 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.306194 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.306525 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.306589 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.306669 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.306708 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.306738 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.306775 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.306936 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.306973 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307037 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307070 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307110 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307283 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307322 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307386 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307426 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307498 4996
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307594 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307775 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307851 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307879 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307927 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307955 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307933 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.307997 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.308062 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.308067 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.308091 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.308150 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.308157 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.461815 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.463707 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.463951 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.464062 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.464083 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.464121 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.464749 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.471807 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.492360 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.498931 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.502799 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 09:00:37 crc kubenswrapper[4996]: W0228 09:00:37.521990 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-898b2e31661f86a9715ede99007f9d8135983f6d08271fb65b4554577fa9024e WatchSource:0}: Error finding container 898b2e31661f86a9715ede99007f9d8135983f6d08271fb65b4554577fa9024e: Status 404 returned error can't find the container with id 898b2e31661f86a9715ede99007f9d8135983f6d08271fb65b4554577fa9024e Feb 28 09:00:37 crc kubenswrapper[4996]: W0228 09:00:37.525579 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-d43975465072fd6e03ea862da2656c25a0a199053ff3473ad4f62422f435c9a0 WatchSource:0}: Error finding container d43975465072fd6e03ea862da2656c25a0a199053ff3473ad4f62422f435c9a0: Status 404 returned error can't find the container with id d43975465072fd6e03ea862da2656c25a0a199053ff3473ad4f62422f435c9a0 Feb 28 09:00:37 crc kubenswrapper[4996]: W0228 09:00:37.531203 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-cfed53f45bb10ae57df06b55d6228344e0ef59392b49951dd28adafa75967c90 WatchSource:0}: Error finding container cfed53f45bb10ae57df06b55d6228344e0ef59392b49951dd28adafa75967c90: Status 404 returned error can't find the container with id cfed53f45bb10ae57df06b55d6228344e0ef59392b49951dd28adafa75967c90 Feb 28 09:00:37 crc kubenswrapper[4996]: W0228 09:00:37.538068 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-5dea0d5971ccd34bf9db649f06b75cb797fd1c85a9604e16356d6484c1ceb1d9 
WatchSource:0}: Error finding container 5dea0d5971ccd34bf9db649f06b75cb797fd1c85a9604e16356d6484c1ceb1d9: Status 404 returned error can't find the container with id 5dea0d5971ccd34bf9db649f06b75cb797fd1c85a9604e16356d6484c1ceb1d9 Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.573892 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms" Feb 28 09:00:37 crc kubenswrapper[4996]: W0228 09:00:37.747050 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.747174 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.865338 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.867076 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.867119 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.867132 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:37 crc 
kubenswrapper[4996]: I0228 09:00:37.867161 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:00:37 crc kubenswrapper[4996]: E0228 09:00:37.867696 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Feb 28 09:00:37 crc kubenswrapper[4996]: I0228 09:00:37.956088 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 28 09:00:38 crc kubenswrapper[4996]: W0228 09:00:38.016091 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 28 09:00:38 crc kubenswrapper[4996]: E0228 09:00:38.016192 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.036387 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"898b2e31661f86a9715ede99007f9d8135983f6d08271fb65b4554577fa9024e"} Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.037653 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5dea0d5971ccd34bf9db649f06b75cb797fd1c85a9604e16356d6484c1ceb1d9"} Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.039055 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e1b2f8dd22dad7546d5b86b1362778a8172f34154f83f01540de5abd25c5cc86"} Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.040443 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cfed53f45bb10ae57df06b55d6228344e0ef59392b49951dd28adafa75967c90"} Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.041622 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d43975465072fd6e03ea862da2656c25a0a199053ff3473ad4f62422f435c9a0"} Feb 28 09:00:38 crc kubenswrapper[4996]: W0228 09:00:38.257477 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 28 09:00:38 crc kubenswrapper[4996]: E0228 09:00:38.257567 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 28 09:00:38 crc kubenswrapper[4996]: E0228 09:00:38.375332 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Feb 28 09:00:38 crc kubenswrapper[4996]: W0228 09:00:38.410164 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 28 09:00:38 crc kubenswrapper[4996]: E0228 09:00:38.410261 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.668546 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.674449 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.674502 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.674521 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.674551 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:00:38 crc kubenswrapper[4996]: E0228 09:00:38.675802 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Feb 28 09:00:38 
crc kubenswrapper[4996]: I0228 09:00:38.922133 4996 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 09:00:38 crc kubenswrapper[4996]: E0228 09:00:38.923141 4996 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 28 09:00:38 crc kubenswrapper[4996]: I0228 09:00:38.957106 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.046829 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"14cfe234b07ea83736ecdebb6e64914bceec4444d7faf3d57168e8bf6d1ab315"} Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.046913 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f5f7ed00388bdbf05be73ad0d372e977cbfc875eeb82ae16cf1ef5a834d59903"} Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.046947 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541"} Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.046972 4996 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2"} Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.046860 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.048267 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.048321 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.048343 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.049283 4996 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058" exitCode=0 Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.049362 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058"} Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.049450 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.050519 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.050575 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.050601 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.051242 4996 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4" exitCode=0 Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.051329 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4"} Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.051380 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.052433 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.052611 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.052672 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.052701 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.053673 4996 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="fde55df655745f6ab3106173b40424d36550480ce5313e19446bffccd7256a48" exitCode=0 Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.053733 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"fde55df655745f6ab3106173b40424d36550480ce5313e19446bffccd7256a48"} Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.053823 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.054167 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.054223 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.054247 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.054677 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.054705 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.054716 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.056645 4996 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="83383bc28500bd80dabf45e1c490811d9495809cfdaf6450cf575a53b3f7aaf4" exitCode=0 Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.056706 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"83383bc28500bd80dabf45e1c490811d9495809cfdaf6450cf575a53b3f7aaf4"} Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.056803 4996 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.058121 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.058155 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.058164 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.391800 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:00:39 crc kubenswrapper[4996]: W0228 09:00:39.866826 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 28 09:00:39 crc kubenswrapper[4996]: E0228 09:00:39.866933 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 28 09:00:39 crc kubenswrapper[4996]: I0228 09:00:39.956163 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 28 09:00:39 crc kubenswrapper[4996]: E0228 09:00:39.976214 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="3.2s" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.064244 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"166188988cc1f2a66ac789f9c7336f5216402729447310633e8244b53dadb8ff"} Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.064269 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.065348 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.065373 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.065383 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.068514 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b3ee97da3b6d5344dbdf716bcd2ad26eb73f3814a680f4ca76ed40222e9a4a38"} Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.068539 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8fee7bb94b92e6dcb4ada0cfbb6fcd25ee6800820e3d1ec2d28fa55ac405c27d"} Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.068549 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6b052d2385ced8432d3ef836df06e53e5cddc8cdf549c8b83719e06526a0cd9e"} Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.068626 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.069863 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.069892 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.069910 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.073522 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c"} Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.073549 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005"} Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.073560 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b"} Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.073569 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f"} Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.078352 4996 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe" exitCode=0 Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.078478 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.078901 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.079032 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe"} Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.079292 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.079310 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.079319 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.083481 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.083511 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.083522 
4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.276816 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.277827 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.277855 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.277863 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:40 crc kubenswrapper[4996]: I0228 09:00:40.277882 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:00:40 crc kubenswrapper[4996]: E0228 09:00:40.278220 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.9:6443: connect: connection refused" node="crc" Feb 28 09:00:40 crc kubenswrapper[4996]: W0228 09:00:40.482083 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.9:6443: connect: connection refused Feb 28 09:00:40 crc kubenswrapper[4996]: E0228 09:00:40.482178 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.9:6443: connect: connection refused" logger="UnhandledError" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.086103 4996 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4300dee463ab1e674193f14727ab47115becd3d7799cc25e5b063bc2ed6cca6"} Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.086221 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.087531 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.087575 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.087593 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.088242 4996 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b" exitCode=0 Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.088279 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b"} Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.088366 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.088371 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.088403 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 
09:00:41.088547 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.088562 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.092586 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.092652 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.092675 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.092704 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.092678 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.092781 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.093133 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.093817 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.093860 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.094109 4996 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.094179 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:41 crc kubenswrapper[4996]: I0228 09:00:41.094204 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.096325 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.096506 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5"} Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.097268 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9"} Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.097322 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf"} Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.097483 4996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.097579 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.098286 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.098345 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.098369 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.098860 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.098914 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.098938 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.323340 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.323508 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.325139 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.325206 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.325229 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.912180 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:00:42 crc kubenswrapper[4996]: I0228 09:00:42.923543 4996 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.106423 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.106453 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.106424 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640"} Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.106654 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d"} Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.107905 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.107965 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.107990 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.109188 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.109262 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.109288 4996 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.143979 4996 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.260564 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.409214 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.478349 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.480376 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.480459 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.480478 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:43 crc kubenswrapper[4996]: I0228 09:00:43.480520 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.108787 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.108844 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.110452 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.110504 4996 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.110523 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.110557 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.110593 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.110621 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.226899 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.227178 4996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.227231 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.228790 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.228851 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.228874 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:44 crc kubenswrapper[4996]: I0228 09:00:44.594484 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 
09:00:45.065598 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.111473 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.111472 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.111650 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.114073 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.114156 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.114176 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.114199 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.114211 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.114218 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.114165 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.114476 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.114500 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.324232 4996 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 09:00:45 crc kubenswrapper[4996]: I0228 09:00:45.324429 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 09:00:46 crc kubenswrapper[4996]: I0228 09:00:46.114645 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:46 crc kubenswrapper[4996]: I0228 09:00:46.116558 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:46 crc kubenswrapper[4996]: I0228 09:00:46.116614 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:46 crc kubenswrapper[4996]: I0228 09:00:46.116640 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:47 crc kubenswrapper[4996]: E0228 09:00:47.133726 4996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.398665 4996 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.398869 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.400368 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.400437 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.400462 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.494357 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.494629 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.496160 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.496215 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:49 crc kubenswrapper[4996]: I0228 09:00:49.496235 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:50 crc kubenswrapper[4996]: W0228 09:00:50.869566 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 28 09:00:50 crc kubenswrapper[4996]: I0228 
09:00:50.869705 4996 trace.go:236] Trace[211981017]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Feb-2026 09:00:40.867) (total time: 10001ms): Feb 28 09:00:50 crc kubenswrapper[4996]: Trace[211981017]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:00:50.869) Feb 28 09:00:50 crc kubenswrapper[4996]: Trace[211981017]: [10.001994393s] [10.001994393s] END Feb 28 09:00:50 crc kubenswrapper[4996]: E0228 09:00:50.869782 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 28 09:00:50 crc kubenswrapper[4996]: I0228 09:00:50.956170 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 28 09:00:51 crc kubenswrapper[4996]: W0228 09:00:51.469568 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 28 09:00:51 crc kubenswrapper[4996]: I0228 09:00:51.469694 4996 trace.go:236] Trace[788867094]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Feb-2026 09:00:41.467) (total time: 10002ms): Feb 28 09:00:51 crc kubenswrapper[4996]: Trace[788867094]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (09:00:51.469) Feb 28 09:00:51 crc 
kubenswrapper[4996]: Trace[788867094]: [10.002174388s] [10.002174388s] END Feb 28 09:00:51 crc kubenswrapper[4996]: E0228 09:00:51.469725 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 28 09:00:51 crc kubenswrapper[4996]: E0228 09:00:51.838087 4996 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 09:00:51 crc kubenswrapper[4996]: E0228 09:00:51.839782 4996 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:51Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18985d7b2add9382 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:36.951815042 +0000 UTC m=+0.642617893,LastTimestamp:2026-02-28 09:00:36.951815042 +0000 UTC m=+0.642617893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:00:51 crc kubenswrapper[4996]: 
E0228 09:00:51.842430 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:51Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 28 09:00:51 crc kubenswrapper[4996]: E0228 09:00:51.851468 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:51Z is after 2026-02-23T05:33:13Z" node="crc" Feb 28 09:00:51 crc kubenswrapper[4996]: W0228 09:00:51.854882 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:51Z is after 2026-02-23T05:33:13Z Feb 28 09:00:51 crc kubenswrapper[4996]: E0228 09:00:51.854995 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 09:00:51 crc kubenswrapper[4996]: W0228 09:00:51.858894 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-28T09:00:51Z is after 2026-02-23T05:33:13Z Feb 28 09:00:51 crc kubenswrapper[4996]: E0228 09:00:51.858987 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 09:00:51 crc kubenswrapper[4996]: I0228 09:00:51.859587 4996 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 09:00:51 crc kubenswrapper[4996]: I0228 09:00:51.859670 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 28 09:00:51 crc kubenswrapper[4996]: I0228 09:00:51.866980 4996 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 09:00:51 crc kubenswrapper[4996]: I0228 09:00:51.867067 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 28 09:00:51 crc kubenswrapper[4996]: I0228 09:00:51.958506 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:51Z is after 2026-02-23T05:33:13Z Feb 28 09:00:52 crc kubenswrapper[4996]: I0228 09:00:52.132905 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 09:00:52 crc kubenswrapper[4996]: I0228 09:00:52.135059 4996 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4300dee463ab1e674193f14727ab47115becd3d7799cc25e5b063bc2ed6cca6" exitCode=255 Feb 28 09:00:52 crc kubenswrapper[4996]: I0228 09:00:52.135107 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b4300dee463ab1e674193f14727ab47115becd3d7799cc25e5b063bc2ed6cca6"} Feb 28 09:00:52 crc kubenswrapper[4996]: I0228 09:00:52.135236 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:52 crc kubenswrapper[4996]: I0228 09:00:52.136157 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:52 crc kubenswrapper[4996]: I0228 09:00:52.136180 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:52 crc kubenswrapper[4996]: I0228 09:00:52.136189 4996 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 28 09:00:52 crc kubenswrapper[4996]: I0228 09:00:52.136581 4996 scope.go:117] "RemoveContainer" containerID="b4300dee463ab1e674193f14727ab47115becd3d7799cc25e5b063bc2ed6cca6" Feb 28 09:00:52 crc kubenswrapper[4996]: I0228 09:00:52.958576 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:52Z is after 2026-02-23T05:33:13Z Feb 28 09:00:53 crc kubenswrapper[4996]: I0228 09:00:53.140278 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 09:00:53 crc kubenswrapper[4996]: I0228 09:00:53.142370 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684"} Feb 28 09:00:53 crc kubenswrapper[4996]: I0228 09:00:53.142570 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:53 crc kubenswrapper[4996]: I0228 09:00:53.143740 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:53 crc kubenswrapper[4996]: I0228 09:00:53.143783 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:53 crc kubenswrapper[4996]: I0228 09:00:53.143798 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:53 crc kubenswrapper[4996]: I0228 09:00:53.961708 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:53Z is after 2026-02-23T05:33:13Z Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.157146 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.158784 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.162045 4996 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684" exitCode=255 Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.162115 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684"} Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.162173 4996 scope.go:117] "RemoveContainer" containerID="b4300dee463ab1e674193f14727ab47115becd3d7799cc25e5b063bc2ed6cca6" Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.162387 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.163851 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.163949 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 
28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.163981 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.165286 4996 scope.go:117] "RemoveContainer" containerID="d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684" Feb 28 09:00:54 crc kubenswrapper[4996]: E0228 09:00:54.165716 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.595036 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:54 crc kubenswrapper[4996]: W0228 09:00:54.834499 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:54Z is after 2026-02-23T05:33:13Z Feb 28 09:00:54 crc kubenswrapper[4996]: E0228 09:00:54.834606 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:54Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 09:00:54 crc kubenswrapper[4996]: I0228 09:00:54.960915 
4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:54Z is after 2026-02-23T05:33:13Z Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.074810 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.167654 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.171606 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.173057 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.173117 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.173142 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.174318 4996 scope.go:117] "RemoveContainer" containerID="d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684" Feb 28 09:00:55 crc kubenswrapper[4996]: E0228 09:00:55.174693 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.178424 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.324312 4996 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.324408 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 09:00:55 crc kubenswrapper[4996]: I0228 09:00:55.959364 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:55Z is after 2026-02-23T05:33:13Z Feb 28 09:00:56 crc kubenswrapper[4996]: I0228 09:00:56.174389 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:56 crc kubenswrapper[4996]: I0228 09:00:56.175944 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:56 crc kubenswrapper[4996]: I0228 09:00:56.176229 4996 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:56 crc kubenswrapper[4996]: I0228 09:00:56.176423 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:56 crc kubenswrapper[4996]: I0228 09:00:56.177488 4996 scope.go:117] "RemoveContainer" containerID="d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684" Feb 28 09:00:56 crc kubenswrapper[4996]: E0228 09:00:56.177930 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:00:56 crc kubenswrapper[4996]: W0228 09:00:56.562131 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:56Z is after 2026-02-23T05:33:13Z Feb 28 09:00:56 crc kubenswrapper[4996]: E0228 09:00:56.562681 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 09:00:56 crc kubenswrapper[4996]: I0228 09:00:56.960996 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:00:56Z is after 2026-02-23T05:33:13Z Feb 28 09:00:57 crc kubenswrapper[4996]: E0228 09:00:57.134176 4996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 09:00:57 crc kubenswrapper[4996]: I0228 09:00:57.176722 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:57 crc kubenswrapper[4996]: I0228 09:00:57.178249 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:57 crc kubenswrapper[4996]: I0228 09:00:57.178317 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:57 crc kubenswrapper[4996]: I0228 09:00:57.178342 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:57 crc kubenswrapper[4996]: I0228 09:00:57.179383 4996 scope.go:117] "RemoveContainer" containerID="d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684" Feb 28 09:00:57 crc kubenswrapper[4996]: E0228 09:00:57.179746 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:00:57 crc kubenswrapper[4996]: I0228 09:00:57.963337 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:00:58 crc kubenswrapper[4996]: E0228 09:00:58.250422 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.252467 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.254144 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.254204 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.254225 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.254260 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:00:58 crc kubenswrapper[4996]: E0228 09:00:58.261208 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.287520 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.287780 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.289297 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.289358 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.289383 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.290199 4996 scope.go:117] "RemoveContainer" containerID="d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684" Feb 28 09:00:58 crc kubenswrapper[4996]: E0228 09:00:58.290478 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:00:58 crc kubenswrapper[4996]: I0228 09:00:58.959458 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:00:59 crc kubenswrapper[4996]: I0228 09:00:59.532183 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 28 09:00:59 crc kubenswrapper[4996]: I0228 09:00:59.532442 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:00:59 crc kubenswrapper[4996]: I0228 09:00:59.533989 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:00:59 crc kubenswrapper[4996]: I0228 09:00:59.534105 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 28 09:00:59 crc kubenswrapper[4996]: I0228 09:00:59.534148 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:00:59 crc kubenswrapper[4996]: I0228 09:00:59.550720 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 28 09:00:59 crc kubenswrapper[4996]: I0228 09:00:59.962357 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:00 crc kubenswrapper[4996]: I0228 09:01:00.177857 4996 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 09:01:00 crc kubenswrapper[4996]: I0228 09:01:00.189883 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:00 crc kubenswrapper[4996]: I0228 09:01:00.191509 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:00 crc kubenswrapper[4996]: I0228 09:01:00.191562 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:00 crc kubenswrapper[4996]: I0228 09:01:00.191580 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:00 crc kubenswrapper[4996]: I0228 09:01:00.199984 4996 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 28 09:01:00 crc kubenswrapper[4996]: I0228 09:01:00.960309 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:01 
crc kubenswrapper[4996]: E0228 09:01:01.847101 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2add9382 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:36.951815042 +0000 UTC m=+0.642617893,LastTimestamp:2026-02-28 09:00:36.951815042 +0000 UTC m=+0.642617893,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.856200 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec8a31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017551642 +0000 UTC m=+0.708354493,LastTimestamp:2026-02-28 09:00:37.017551642 +0000 UTC m=+0.708354493,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.864914 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.18985d7b2ec935b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017589173 +0000 UTC m=+0.708392024,LastTimestamp:2026-02-28 09:00:37.017589173 +0000 UTC m=+0.708392024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.872438 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec9771a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017605914 +0000 UTC m=+0.708408765,LastTimestamp:2026-02-28 09:00:37.017605914 +0000 UTC m=+0.708408765,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.879607 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b3524c71d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.124253469 +0000 UTC m=+0.815056320,LastTimestamp:2026-02-28 09:00:37.124253469 +0000 UTC m=+0.815056320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.884049 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec8a31a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec8a31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017551642 +0000 UTC m=+0.708354493,LastTimestamp:2026-02-28 09:00:37.134507055 +0000 UTC m=+0.825309906,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.888870 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec935b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec935b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017589173 +0000 UTC m=+0.708392024,LastTimestamp:2026-02-28 09:00:37.134590747 +0000 UTC m=+0.825393598,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.891571 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec9771a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec9771a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017605914 +0000 UTC m=+0.708408765,LastTimestamp:2026-02-28 09:00:37.134607318 +0000 UTC m=+0.825410169,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.897428 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec8a31a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec8a31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017551642 +0000 UTC 
m=+0.708354493,LastTimestamp:2026-02-28 09:00:37.13642861 +0000 UTC m=+0.827231451,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.905188 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec935b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec935b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017589173 +0000 UTC m=+0.708392024,LastTimestamp:2026-02-28 09:00:37.13645822 +0000 UTC m=+0.827261061,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.912835 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec9771a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec9771a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017605914 +0000 UTC m=+0.708408765,LastTimestamp:2026-02-28 09:00:37.136516552 +0000 UTC m=+0.827319403,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.920158 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec8a31a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec8a31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017551642 +0000 UTC m=+0.708354493,LastTimestamp:2026-02-28 09:00:37.137744106 +0000 UTC m=+0.828546917,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.927738 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec935b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec935b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017589173 +0000 UTC m=+0.708392024,LastTimestamp:2026-02-28 09:00:37.137775757 +0000 UTC m=+0.828578568,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.933923 4996 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec9771a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec9771a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017605914 +0000 UTC m=+0.708408765,LastTimestamp:2026-02-28 09:00:37.137784498 +0000 UTC m=+0.828587309,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: W0228 09:01:01.941428 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.941496 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.941562 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec8a31a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec8a31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017551642 +0000 UTC m=+0.708354493,LastTimestamp:2026-02-28 09:00:37.139271049 +0000 UTC m=+0.830073890,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.945301 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec935b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec935b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017589173 +0000 UTC m=+0.708392024,LastTimestamp:2026-02-28 09:00:37.1392986 +0000 UTC m=+0.830101451,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.949252 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec9771a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec9771a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017605914 +0000 UTC m=+0.708408765,LastTimestamp:2026-02-28 09:00:37.139323231 +0000 UTC m=+0.830126082,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.955946 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec8a31a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec8a31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017551642 +0000 UTC m=+0.708354493,LastTimestamp:2026-02-28 09:00:37.141851842 +0000 UTC m=+0.832654693,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.960620 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec935b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec935b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017589173 +0000 UTC 
m=+0.708392024,LastTimestamp:2026-02-28 09:00:37.142185811 +0000 UTC m=+0.832988662,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: I0228 09:01:01.960893 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.963511 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec9771a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec9771a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017605914 +0000 UTC m=+0.708408765,LastTimestamp:2026-02-28 09:00:37.142242652 +0000 UTC m=+0.833045503,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.968330 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec8a31a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec8a31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc 
status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017551642 +0000 UTC m=+0.708354493,LastTimestamp:2026-02-28 09:00:37.144597738 +0000 UTC m=+0.835400589,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.973949 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec935b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec935b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017589173 +0000 UTC m=+0.708392024,LastTimestamp:2026-02-28 09:00:37.144617919 +0000 UTC m=+0.835420770,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.979245 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec9771a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec9771a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017605914 +0000 UTC 
m=+0.708408765,LastTimestamp:2026-02-28 09:00:37.14466839 +0000 UTC m=+0.835471231,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.987198 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec8a31a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec8a31a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017551642 +0000 UTC m=+0.708354493,LastTimestamp:2026-02-28 09:00:37.147066647 +0000 UTC m=+0.837869458,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:01 crc kubenswrapper[4996]: E0228 09:01:01.994054 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18985d7b2ec935b5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18985d7b2ec935b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.017589173 +0000 UTC m=+0.708392024,LastTimestamp:2026-02-28 09:00:37.147084837 +0000 UTC m=+0.837887648,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.001615 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18985d7b4da19fad openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.535088557 +0000 UTC m=+1.225891368,LastTimestamp:2026-02-28 09:00:37.535088557 +0000 UTC m=+1.225891368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.007786 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7b4da1cba0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.535099808 +0000 UTC m=+1.225902669,LastTimestamp:2026-02-28 09:00:37.535099808 +0000 UTC m=+1.225902669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.013281 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b4db20876 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.536163958 +0000 UTC m=+1.226966769,LastTimestamp:2026-02-28 09:00:37.536163958 +0000 UTC m=+1.226966769,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.018178 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7b4dc08c63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.537115235 +0000 UTC m=+1.227918046,LastTimestamp:2026-02-28 09:00:37.537115235 +0000 UTC m=+1.227918046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.022891 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7b4e21f0a5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:37.543497893 +0000 UTC m=+1.234300694,LastTimestamp:2026-02-28 09:00:37.543497893 +0000 UTC m=+1.234300694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.029571 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7b71b967d1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.140626897 +0000 UTC m=+1.831429698,LastTimestamp:2026-02-28 09:00:38.140626897 +0000 UTC m=+1.831429698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.035701 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b71e50c15 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.143486997 +0000 UTC 
m=+1.834289818,LastTimestamp:2026-02-28 09:00:38.143486997 +0000 UTC m=+1.834289818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.043324 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7b72326363 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.148555619 +0000 UTC m=+1.839358440,LastTimestamp:2026-02-28 09:00:38.148555619 +0000 UTC m=+1.839358440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.048192 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18985d7b726b63ab openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.152291243 +0000 UTC 
m=+1.843094144,LastTimestamp:2026-02-28 09:00:38.152291243 +0000 UTC m=+1.843094144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.055723 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b727ba59f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.153356703 +0000 UTC m=+1.844159524,LastTimestamp:2026-02-28 09:00:38.153356703 +0000 UTC m=+1.844159524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.061226 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7b728caa6c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started 
container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.154472044 +0000 UTC m=+1.845274855,LastTimestamp:2026-02-28 09:00:38.154472044 +0000 UTC m=+1.845274855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.068086 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b7296f075 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.155145333 +0000 UTC m=+1.845948134,LastTimestamp:2026-02-28 09:00:38.155145333 +0000 UTC m=+1.845948134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.074993 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7b72a5897a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.15610201 +0000 UTC m=+1.846904861,LastTimestamp:2026-02-28 09:00:38.15610201 +0000 UTC m=+1.846904861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.083216 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7b73b23023 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.173708323 +0000 UTC m=+1.864511134,LastTimestamp:2026-02-28 09:00:38.173708323 +0000 UTC m=+1.864511134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.089271 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18985d7b743db332 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.182851378 +0000 UTC m=+1.873654189,LastTimestamp:2026-02-28 09:00:38.182851378 +0000 UTC m=+1.873654189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.094874 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7b744099b7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.183041463 +0000 UTC m=+1.873844274,LastTimestamp:2026-02-28 09:00:38.183041463 +0000 UTC m=+1.873844274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.101804 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b859d5022 openshift-kube-controller-manager 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.474330146 +0000 UTC m=+2.165132977,LastTimestamp:2026-02-28 09:00:38.474330146 +0000 UTC m=+2.165132977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.111101 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b866331e2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.48729853 +0000 UTC m=+2.178101381,LastTimestamp:2026-02-28 09:00:38.48729853 +0000 UTC m=+2.178101381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.118063 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b867a277b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.488803195 +0000 UTC m=+2.179606016,LastTimestamp:2026-02-28 09:00:38.488803195 +0000 UTC m=+2.179606016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.125229 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b943a8f92 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.719516562 +0000 UTC m=+2.410319413,LastTimestamp:2026-02-28 09:00:38.719516562 +0000 UTC 
m=+2.410319413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.131754 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b951f9c98 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.73452764 +0000 UTC m=+2.425330481,LastTimestamp:2026-02-28 09:00:38.73452764 +0000 UTC m=+2.425330481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.138991 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b95369f65 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.736035685 +0000 UTC m=+2.426838496,LastTimestamp:2026-02-28 09:00:38.736035685 +0000 UTC m=+2.426838496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.146673 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7ba1cc952b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.947190059 +0000 UTC m=+2.637992910,LastTimestamp:2026-02-28 09:00:38.947190059 +0000 UTC m=+2.637992910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.154234 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7ba2b802b0 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.962619056 +0000 UTC m=+2.653421867,LastTimestamp:2026-02-28 09:00:38.962619056 +0000 UTC m=+2.653421867,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.160881 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7ba80df55d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.052137821 +0000 UTC m=+2.742940642,LastTimestamp:2026-02-28 09:00:39.052137821 +0000 UTC m=+2.742940642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.168699 4996 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7ba864b007 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.057821703 +0000 UTC m=+2.748624524,LastTimestamp:2026-02-28 09:00:39.057821703 +0000 UTC m=+2.748624524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.174411 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18985d7ba864b6b5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.057823413 +0000 UTC 
m=+2.748626264,LastTimestamp:2026-02-28 09:00:39.057823413 +0000 UTC m=+2.748626264,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.179519 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7ba883ceb2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.05986117 +0000 UTC m=+2.750664011,LastTimestamp:2026-02-28 09:00:39.05986117 +0000 UTC m=+2.750664011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.186386 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bb7dff41e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.317558302 +0000 UTC m=+3.008361153,LastTimestamp:2026-02-28 09:00:39.317558302 +0000 UTC m=+3.008361153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.192502 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18985d7bb7ec5e20 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.318371872 +0000 UTC m=+3.009174713,LastTimestamp:2026-02-28 09:00:39.318371872 +0000 UTC m=+3.009174713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.197757 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7bb7f240a4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.31875754 +0000 UTC m=+3.009560381,LastTimestamp:2026-02-28 09:00:39.31875754 +0000 UTC m=+3.009560381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.203519 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7bb7f7430f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.319085839 +0000 UTC m=+3.009888690,LastTimestamp:2026-02-28 09:00:39.319085839 +0000 UTC m=+3.009888690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.208852 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7bb89d2104 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.3299561 +0000 UTC m=+3.020758941,LastTimestamp:2026-02-28 09:00:39.3299561 +0000 UTC m=+3.020758941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.214369 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7bb8ac1761 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.330936673 +0000 UTC m=+3.021739514,LastTimestamp:2026-02-28 09:00:39.330936673 +0000 UTC m=+3.021739514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc 
kubenswrapper[4996]: E0228 09:01:02.220694 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bb8bfeabb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.332235963 +0000 UTC m=+3.023038774,LastTimestamp:2026-02-28 09:00:39.332235963 +0000 UTC m=+3.023038774,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.225589 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bb8cd805a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.333126234 +0000 UTC m=+3.023929055,LastTimestamp:2026-02-28 09:00:39.333126234 
+0000 UTC m=+3.023929055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.231174 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18985d7bb8d8184a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.33382049 +0000 UTC m=+3.024623301,LastTimestamp:2026-02-28 09:00:39.33382049 +0000 UTC m=+3.024623301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.237081 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7bba2c7931 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.356127537 +0000 UTC 
m=+3.046930348,LastTimestamp:2026-02-28 09:00:39.356127537 +0000 UTC m=+3.046930348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.242631 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bc65c0ae9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.560571625 +0000 UTC m=+3.251374436,LastTimestamp:2026-02-28 09:00:39.560571625 +0000 UTC m=+3.251374436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.247990 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7bc65e40d2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.560716498 +0000 UTC m=+3.251519309,LastTimestamp:2026-02-28 09:00:39.560716498 +0000 UTC m=+3.251519309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.252111 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bc761a4d2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.577715922 +0000 UTC m=+3.268518753,LastTimestamp:2026-02-28 09:00:39.577715922 +0000 UTC m=+3.268518753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.257200 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bc771d91f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.578777887 +0000 UTC m=+3.269580718,LastTimestamp:2026-02-28 09:00:39.578777887 +0000 UTC m=+3.269580718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.263485 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7bc78f9e14 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.580728852 +0000 UTC m=+3.271531663,LastTimestamp:2026-02-28 09:00:39.580728852 +0000 UTC m=+3.271531663,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.268367 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7bc799c610 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.581394448 +0000 UTC m=+3.272197259,LastTimestamp:2026-02-28 09:00:39.581394448 +0000 UTC m=+3.272197259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.274523 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7bd30c69a9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.773456809 +0000 UTC m=+3.464259620,LastTimestamp:2026-02-28 09:00:39.773456809 +0000 UTC 
m=+3.464259620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.280213 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bd32adb0c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.775451916 +0000 UTC m=+3.466254747,LastTimestamp:2026-02-28 09:00:39.775451916 +0000 UTC m=+3.466254747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.286238 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bd44b146d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container 
kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.794340973 +0000 UTC m=+3.485143804,LastTimestamp:2026-02-28 09:00:39.794340973 +0000 UTC m=+3.485143804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.291044 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bd458c82a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.795238954 +0000 UTC m=+3.486041775,LastTimestamp:2026-02-28 09:00:39.795238954 +0000 UTC m=+3.486041775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.297346 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18985d7bd48d3af5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.798676213 +0000 UTC m=+3.489479044,LastTimestamp:2026-02-28 09:00:39.798676213 +0000 UTC m=+3.489479044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.300748 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bdf7b5f4c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:39.982055244 +0000 UTC m=+3.672858095,LastTimestamp:2026-02-28 09:00:39.982055244 +0000 UTC m=+3.672858095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.301747 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7be0ed5d63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.006303075 +0000 UTC m=+3.697105886,LastTimestamp:2026-02-28 09:00:40.006303075 +0000 UTC m=+3.697105886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.309071 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7be1036369 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.007746409 +0000 UTC m=+3.698549220,LastTimestamp:2026-02-28 09:00:40.007746409 +0000 UTC m=+3.698549220,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 
09:01:02.314279 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7be59ef921 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.085051681 +0000 UTC m=+3.775854492,LastTimestamp:2026-02-28 09:00:40.085051681 +0000 UTC m=+3.775854492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.318182 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7beca15b11 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.202648337 +0000 UTC m=+3.893451148,LastTimestamp:2026-02-28 09:00:40.202648337 +0000 UTC 
m=+3.893451148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.321719 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bed40a5aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.213087658 +0000 UTC m=+3.903890469,LastTimestamp:2026-02-28 09:00:40.213087658 +0000 UTC m=+3.903890469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.327211 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7bf1b7b625 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.287999525 +0000 UTC m=+3.978802336,LastTimestamp:2026-02-28 
09:00:40.287999525 +0000 UTC m=+3.978802336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.332174 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7bf247589a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.297412762 +0000 UTC m=+3.988215573,LastTimestamp:2026-02-28 09:00:40.297412762 +0000 UTC m=+3.988215573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.339795 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c21ef152f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 
09:00:41.096934703 +0000 UTC m=+4.787737534,LastTimestamp:2026-02-28 09:00:41.096934703 +0000 UTC m=+4.787737534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.345297 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c2f47f27d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:41.320862333 +0000 UTC m=+5.011665164,LastTimestamp:2026-02-28 09:00:41.320862333 +0000 UTC m=+5.011665164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.350758 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c2fe03a02 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:41.330842114 +0000 UTC m=+5.021644925,LastTimestamp:2026-02-28 09:00:41.330842114 +0000 UTC 
m=+5.021644925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.355523 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c2fefdfe2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:41.331867618 +0000 UTC m=+5.022670469,LastTimestamp:2026-02-28 09:00:41.331867618 +0000 UTC m=+5.022670469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.360209 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c3ebb955f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:41.580098911 +0000 UTC 
m=+5.270901762,LastTimestamp:2026-02-28 09:00:41.580098911 +0000 UTC m=+5.270901762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.364511 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c3fed74a2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:41.600144546 +0000 UTC m=+5.290947387,LastTimestamp:2026-02-28 09:00:41.600144546 +0000 UTC m=+5.290947387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.368918 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c400ce0f5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:41.602203893 +0000 UTC m=+5.293006744,LastTimestamp:2026-02-28 09:00:41.602203893 +0000 UTC m=+5.293006744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.373926 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c4ec68bf2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:41.84925285 +0000 UTC m=+5.540055701,LastTimestamp:2026-02-28 09:00:41.84925285 +0000 UTC m=+5.540055701,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.378038 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c5012d1a4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 
09:00:41.871028644 +0000 UTC m=+5.561831485,LastTimestamp:2026-02-28 09:00:41.871028644 +0000 UTC m=+5.561831485,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.383918 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c502f7920 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:41.872906528 +0000 UTC m=+5.563709369,LastTimestamp:2026-02-28 09:00:41.872906528 +0000 UTC m=+5.563709369,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.391313 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c5f1086f5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container 
etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:42.122536693 +0000 UTC m=+5.813339534,LastTimestamp:2026-02-28 09:00:42.122536693 +0000 UTC m=+5.813339534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.396125 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c5ffe98f1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:42.138138865 +0000 UTC m=+5.828941726,LastTimestamp:2026-02-28 09:00:42.138138865 +0000 UTC m=+5.828941726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.401039 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c60141809 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:42.139547657 +0000 UTC m=+5.830350508,LastTimestamp:2026-02-28 09:00:42.139547657 +0000 UTC m=+5.830350508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.402290 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c6e4e960d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:42.378262029 +0000 UTC m=+6.069064850,LastTimestamp:2026-02-28 09:00:42.378262029 +0000 UTC m=+6.069064850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.406349 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18985d7c6f3b7e4c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:42.39378798 +0000 UTC m=+6.084590801,LastTimestamp:2026-02-28 09:00:42.39378798 +0000 UTC m=+6.084590801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.412921 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 09:01:02 crc kubenswrapper[4996]: &Event{ObjectMeta:{kube-controller-manager-crc.18985d7d1de81132 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 28 09:01:02 crc kubenswrapper[4996]: body: Feb 28 09:01:02 crc kubenswrapper[4996]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:45.324333362 +0000 UTC m=+9.015136213,LastTimestamp:2026-02-28 09:00:45.324333362 +0000 UTC m=+9.015136213,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 09:01:02 crc kubenswrapper[4996]: > Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.417186 4996 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7d1dea7232 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:45.324489266 +0000 UTC m=+9.015292127,LastTimestamp:2026-02-28 09:00:45.324489266 +0000 UTC m=+9.015292127,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.421769 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 28 09:01:02 crc kubenswrapper[4996]: &Event{ObjectMeta:{kube-apiserver-crc.18985d7ea37100e6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 28 09:01:02 crc kubenswrapper[4996]: body: 
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 09:01:02 crc kubenswrapper[4996]: Feb 28 09:01:02 crc kubenswrapper[4996]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:51.859644646 +0000 UTC m=+15.550447477,LastTimestamp:2026-02-28 09:00:51.859644646 +0000 UTC m=+15.550447477,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 09:01:02 crc kubenswrapper[4996]: > Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.429249 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7ea371e8e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:51.859704037 +0000 UTC m=+15.550506878,LastTimestamp:2026-02-28 09:00:51.859704037 +0000 UTC m=+15.550506878,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.433282 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18985d7ea37100e6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event=< Feb 28 09:01:02 crc kubenswrapper[4996]: &Event{ObjectMeta:{kube-apiserver-crc.18985d7ea37100e6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 28 09:01:02 crc kubenswrapper[4996]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 09:01:02 crc kubenswrapper[4996]: Feb 28 09:01:02 crc kubenswrapper[4996]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:51.859644646 +0000 UTC m=+15.550447477,LastTimestamp:2026-02-28 09:00:51.867047153 +0000 UTC m=+15.557849964,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 09:01:02 crc kubenswrapper[4996]: > Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.437761 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18985d7ea371e8e5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7ea371e8e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 
403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:51.859704037 +0000 UTC m=+15.550506878,LastTimestamp:2026-02-28 09:00:51.867094965 +0000 UTC m=+15.557897776,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.442632 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18985d7be1036369\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7be1036369 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.007746409 +0000 UTC m=+3.698549220,LastTimestamp:2026-02-28 09:00:52.137638032 +0000 UTC m=+15.828440853,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.447831 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18985d7beca15b11\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7beca15b11 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.202648337 +0000 UTC m=+3.893451148,LastTimestamp:2026-02-28 09:00:52.374263787 +0000 UTC m=+16.065066628,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.453733 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18985d7bed40a5aa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18985d7bed40a5aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:40.213087658 +0000 UTC m=+3.903890469,LastTimestamp:2026-02-28 09:00:52.389345349 +0000 UTC m=+16.080148200,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.462044 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event=< Feb 28 09:01:02 crc kubenswrapper[4996]: &Event{ObjectMeta:{kube-controller-manager-crc.18985d7f71f4c118 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 09:01:02 crc kubenswrapper[4996]: body: Feb 28 09:01:02 crc kubenswrapper[4996]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:55.32438556 +0000 UTC m=+19.015188411,LastTimestamp:2026-02-28 09:00:55.32438556 +0000 UTC m=+19.015188411,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 09:01:02 crc kubenswrapper[4996]: > Feb 28 09:01:02 crc kubenswrapper[4996]: E0228 09:01:02.470762 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7f71f5ae90 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:55.324446352 +0000 UTC m=+19.015249203,LastTimestamp:2026-02-28 09:00:55.324446352 +0000 UTC m=+19.015249203,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:02 crc kubenswrapper[4996]: I0228 09:01:02.964687 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:03 crc kubenswrapper[4996]: W0228 09:01:03.587642 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 28 09:01:03 crc kubenswrapper[4996]: E0228 09:01:03.588081 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 09:01:03 crc kubenswrapper[4996]: I0228 09:01:03.964053 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:04 crc kubenswrapper[4996]: W0228 09:01:04.611535 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 28 09:01:04 crc kubenswrapper[4996]: E0228 09:01:04.611610 4996 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 09:01:04 crc kubenswrapper[4996]: I0228 09:01:04.963658 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:05 crc kubenswrapper[4996]: E0228 09:01:05.258393 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.261553 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.263215 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.263269 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.263293 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.263339 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:01:05 crc kubenswrapper[4996]: E0228 09:01:05.269042 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 09:01:05 crc 
kubenswrapper[4996]: I0228 09:01:05.323660 4996 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.324248 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.324507 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.324872 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.326834 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.326892 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.326914 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.328211 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" 
containerStatusID={"Type":"cri-o","ID":"fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.328475 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541" gracePeriod=30 Feb 28 09:01:05 crc kubenswrapper[4996]: E0228 09:01:05.330342 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18985d7f71f4c118\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 09:01:05 crc kubenswrapper[4996]: &Event{ObjectMeta:{kube-controller-manager-crc.18985d7f71f4c118 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 09:01:05 crc kubenswrapper[4996]: body: Feb 28 09:01:05 crc kubenswrapper[4996]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:55.32438556 +0000 UTC m=+19.015188411,LastTimestamp:2026-02-28 09:01:05.324103946 +0000 UTC m=+29.014906827,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 09:01:05 crc kubenswrapper[4996]: > Feb 28 09:01:05 crc kubenswrapper[4996]: E0228 09:01:05.338393 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18985d7f71f5ae90\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7f71f5ae90 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:55.324446352 +0000 UTC m=+19.015249203,LastTimestamp:2026-02-28 09:01:05.324431944 +0000 UTC m=+29.015234815,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:05 crc kubenswrapper[4996]: E0228 09:01:05.345601 4996 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d81c63ec5b5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:01:05.328457141 +0000 UTC m=+29.019259982,LastTimestamp:2026-02-28 09:01:05.328457141 +0000 UTC m=+29.019259982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:05 crc kubenswrapper[4996]: E0228 09:01:05.457441 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18985d7b7296f075\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b7296f075 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.155145333 +0000 UTC m=+1.845948134,LastTimestamp:2026-02-28 09:01:05.449988671 +0000 UTC m=+29.140791472,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:05 crc kubenswrapper[4996]: E0228 09:01:05.661754 4996 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18985d7b859d5022\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b859d5022 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.474330146 +0000 UTC m=+2.165132977,LastTimestamp:2026-02-28 09:01:05.656069574 +0000 UTC m=+29.346872395,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:05 crc kubenswrapper[4996]: E0228 09:01:05.675681 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18985d7b866331e2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7b866331e2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:38.48729853 +0000 UTC 
m=+2.178101381,LastTimestamp:2026-02-28 09:01:05.669694691 +0000 UTC m=+29.360497512,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:05 crc kubenswrapper[4996]: I0228 09:01:05.961171 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:06 crc kubenswrapper[4996]: I0228 09:01:06.212572 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 09:01:06 crc kubenswrapper[4996]: I0228 09:01:06.213461 4996 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541" exitCode=255 Feb 28 09:01:06 crc kubenswrapper[4996]: I0228 09:01:06.213506 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541"} Feb 28 09:01:06 crc kubenswrapper[4996]: I0228 09:01:06.213565 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c128833e1f1b2f2a81cc9796fb6dee1675fa39b5a1c0dc266f6a5bf7dab6c98e"} Feb 28 09:01:06 crc kubenswrapper[4996]: I0228 09:01:06.213821 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:06 crc kubenswrapper[4996]: I0228 09:01:06.215614 4996 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:06 crc kubenswrapper[4996]: I0228 09:01:06.215672 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:06 crc kubenswrapper[4996]: I0228 09:01:06.215691 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:06 crc kubenswrapper[4996]: I0228 09:01:06.964627 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:07 crc kubenswrapper[4996]: E0228 09:01:07.134307 4996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 09:01:07 crc kubenswrapper[4996]: W0228 09:01:07.905897 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:07 crc kubenswrapper[4996]: E0228 09:01:07.905977 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 09:01:07 crc kubenswrapper[4996]: I0228 09:01:07.961168 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:08 crc kubenswrapper[4996]: I0228 09:01:08.963506 4996 csi_plugin.go:884] Failed to contact 
API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:09 crc kubenswrapper[4996]: I0228 09:01:09.032302 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:09 crc kubenswrapper[4996]: I0228 09:01:09.033872 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:09 crc kubenswrapper[4996]: I0228 09:01:09.033930 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:09 crc kubenswrapper[4996]: I0228 09:01:09.033948 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:09 crc kubenswrapper[4996]: I0228 09:01:09.034816 4996 scope.go:117] "RemoveContainer" containerID="d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684" Feb 28 09:01:09 crc kubenswrapper[4996]: I0228 09:01:09.962605 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.228081 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.229564 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.232922 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="96e37f8a848ba48922362fdcb3d955adb6f118dbdfbddae9dc02979027952365" exitCode=255 Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.232984 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"96e37f8a848ba48922362fdcb3d955adb6f118dbdfbddae9dc02979027952365"} Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.233079 4996 scope.go:117] "RemoveContainer" containerID="d4c8ab920ab4eb7a2aa1488c7df526d73b147368793633d0e538de3ff6aee684" Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.233372 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.234778 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.234978 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.235164 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 09:01:10.236827 4996 scope.go:117] "RemoveContainer" containerID="96e37f8a848ba48922362fdcb3d955adb6f118dbdfbddae9dc02979027952365" Feb 28 09:01:10 crc kubenswrapper[4996]: E0228 09:01:10.237270 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:01:10 crc kubenswrapper[4996]: I0228 
09:01:10.963553 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:11 crc kubenswrapper[4996]: I0228 09:01:11.241490 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 09:01:11 crc kubenswrapper[4996]: I0228 09:01:11.964636 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:12 crc kubenswrapper[4996]: E0228 09:01:12.267065 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.269321 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.270995 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.271086 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.271111 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.271155 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:01:12 crc kubenswrapper[4996]: E0228 
09:01:12.280162 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.323716 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.323910 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.326056 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.326123 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.326150 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:12 crc kubenswrapper[4996]: I0228 09:01:12.962612 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:13 crc kubenswrapper[4996]: I0228 09:01:13.261117 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:01:13 crc kubenswrapper[4996]: I0228 09:01:13.261716 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:13 crc kubenswrapper[4996]: I0228 09:01:13.263235 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:13 crc kubenswrapper[4996]: 
I0228 09:01:13.263296 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:13 crc kubenswrapper[4996]: I0228 09:01:13.263314 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:13 crc kubenswrapper[4996]: I0228 09:01:13.963533 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:14 crc kubenswrapper[4996]: I0228 09:01:14.595137 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:01:14 crc kubenswrapper[4996]: I0228 09:01:14.595392 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:14 crc kubenswrapper[4996]: I0228 09:01:14.597245 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:14 crc kubenswrapper[4996]: I0228 09:01:14.597311 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:14 crc kubenswrapper[4996]: I0228 09:01:14.597331 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:14 crc kubenswrapper[4996]: I0228 09:01:14.598433 4996 scope.go:117] "RemoveContainer" containerID="96e37f8a848ba48922362fdcb3d955adb6f118dbdfbddae9dc02979027952365" Feb 28 09:01:14 crc kubenswrapper[4996]: E0228 09:01:14.598755 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:01:14 crc kubenswrapper[4996]: I0228 09:01:14.961234 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:15 crc kubenswrapper[4996]: I0228 09:01:15.324725 4996 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 09:01:15 crc kubenswrapper[4996]: I0228 09:01:15.324929 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 09:01:15 crc kubenswrapper[4996]: E0228 09:01:15.333078 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18985d7f71f4c118\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 09:01:15 crc kubenswrapper[4996]: &Event{ObjectMeta:{kube-controller-manager-crc.18985d7f71f4c118 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 09:01:15 crc kubenswrapper[4996]: body: Feb 28 09:01:15 crc kubenswrapper[4996]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:55.32438556 +0000 UTC m=+19.015188411,LastTimestamp:2026-02-28 09:01:15.324805185 +0000 UTC m=+39.015608026,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 09:01:15 crc kubenswrapper[4996]: > Feb 28 09:01:15 crc kubenswrapper[4996]: E0228 09:01:15.339926 4996 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18985d7f71f5ae90\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18985d7f71f5ae90 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:00:55.324446352 +0000 UTC m=+19.015249203,LastTimestamp:2026-02-28 09:01:15.32499221 
+0000 UTC m=+39.015795051,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:01:15 crc kubenswrapper[4996]: I0228 09:01:15.962761 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:16 crc kubenswrapper[4996]: I0228 09:01:16.959777 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:17 crc kubenswrapper[4996]: E0228 09:01:17.134576 4996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 09:01:17 crc kubenswrapper[4996]: I0228 09:01:17.960393 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:18 crc kubenswrapper[4996]: I0228 09:01:18.287926 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:01:18 crc kubenswrapper[4996]: I0228 09:01:18.288215 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:18 crc kubenswrapper[4996]: I0228 09:01:18.289874 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:18 crc kubenswrapper[4996]: I0228 09:01:18.289932 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:18 crc 
kubenswrapper[4996]: I0228 09:01:18.289952 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:18 crc kubenswrapper[4996]: I0228 09:01:18.290966 4996 scope.go:117] "RemoveContainer" containerID="96e37f8a848ba48922362fdcb3d955adb6f118dbdfbddae9dc02979027952365" Feb 28 09:01:18 crc kubenswrapper[4996]: E0228 09:01:18.291326 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:01:18 crc kubenswrapper[4996]: I0228 09:01:18.959811 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:19 crc kubenswrapper[4996]: E0228 09:01:19.272931 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 09:01:19 crc kubenswrapper[4996]: I0228 09:01:19.280616 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:19 crc kubenswrapper[4996]: I0228 09:01:19.282347 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:19 crc kubenswrapper[4996]: I0228 09:01:19.282409 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:19 crc kubenswrapper[4996]: I0228 
09:01:19.282431 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:19 crc kubenswrapper[4996]: I0228 09:01:19.282469 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:01:19 crc kubenswrapper[4996]: E0228 09:01:19.287480 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 09:01:19 crc kubenswrapper[4996]: I0228 09:01:19.963007 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:20 crc kubenswrapper[4996]: I0228 09:01:20.960555 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:21 crc kubenswrapper[4996]: I0228 09:01:21.959873 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:22 crc kubenswrapper[4996]: I0228 09:01:22.329492 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:01:22 crc kubenswrapper[4996]: I0228 09:01:22.329657 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:22 crc kubenswrapper[4996]: I0228 09:01:22.331043 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:22 
crc kubenswrapper[4996]: I0228 09:01:22.331121 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:22 crc kubenswrapper[4996]: I0228 09:01:22.331148 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:22 crc kubenswrapper[4996]: I0228 09:01:22.334078 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:01:22 crc kubenswrapper[4996]: I0228 09:01:22.961151 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:23 crc kubenswrapper[4996]: I0228 09:01:23.283951 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:23 crc kubenswrapper[4996]: I0228 09:01:23.284745 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:23 crc kubenswrapper[4996]: I0228 09:01:23.284777 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:23 crc kubenswrapper[4996]: I0228 09:01:23.284790 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:23 crc kubenswrapper[4996]: I0228 09:01:23.960636 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:24 crc kubenswrapper[4996]: W0228 09:01:24.402723 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: 
User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 28 09:01:24 crc kubenswrapper[4996]: E0228 09:01:24.402777 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 09:01:24 crc kubenswrapper[4996]: W0228 09:01:24.736464 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 28 09:01:24 crc kubenswrapper[4996]: E0228 09:01:24.736512 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 09:01:24 crc kubenswrapper[4996]: I0228 09:01:24.961933 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:25 crc kubenswrapper[4996]: I0228 09:01:25.962343 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:26 crc kubenswrapper[4996]: E0228 09:01:26.277031 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource 
\"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 09:01:26 crc kubenswrapper[4996]: I0228 09:01:26.288128 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:26 crc kubenswrapper[4996]: I0228 09:01:26.289097 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:26 crc kubenswrapper[4996]: I0228 09:01:26.289134 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:26 crc kubenswrapper[4996]: I0228 09:01:26.289143 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:26 crc kubenswrapper[4996]: I0228 09:01:26.289165 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:01:26 crc kubenswrapper[4996]: E0228 09:01:26.293551 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 09:01:26 crc kubenswrapper[4996]: I0228 09:01:26.959507 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:27 crc kubenswrapper[4996]: E0228 09:01:27.134687 4996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 09:01:27 crc kubenswrapper[4996]: I0228 09:01:27.857776 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 09:01:27 crc kubenswrapper[4996]: I0228 09:01:27.857943 4996 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 28 09:01:27 crc kubenswrapper[4996]: I0228 09:01:27.859058 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:27 crc kubenswrapper[4996]: I0228 09:01:27.859092 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:27 crc kubenswrapper[4996]: I0228 09:01:27.859105 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:27 crc kubenswrapper[4996]: I0228 09:01:27.963684 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:28 crc kubenswrapper[4996]: W0228 09:01:28.305267 4996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 28 09:01:28 crc kubenswrapper[4996]: E0228 09:01:28.305333 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 09:01:28 crc kubenswrapper[4996]: I0228 09:01:28.962343 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:29 crc kubenswrapper[4996]: I0228 09:01:29.962659 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:30 crc kubenswrapper[4996]: I0228 09:01:30.962686 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:31 crc kubenswrapper[4996]: I0228 09:01:31.959095 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:32 crc kubenswrapper[4996]: I0228 09:01:32.964186 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.033046 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.034657 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.034723 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.034747 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.035713 4996 scope.go:117] "RemoveContainer" containerID="96e37f8a848ba48922362fdcb3d955adb6f118dbdfbddae9dc02979027952365" Feb 28 09:01:33 crc kubenswrapper[4996]: W0228 09:01:33.100894 4996 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:33 crc kubenswrapper[4996]: E0228 09:01:33.100966 4996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 09:01:33 crc kubenswrapper[4996]: E0228 09:01:33.283794 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.294170 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.296287 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.296353 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.296437 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.296480 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:01:33 crc kubenswrapper[4996]: E0228 09:01:33.303438 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group 
\"\" at the cluster scope" node="crc" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.309572 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.311633 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c"} Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.311780 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.312613 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.312682 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.312702 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:33 crc kubenswrapper[4996]: I0228 09:01:33.962753 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.317151 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.317915 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.320237 4996 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c" exitCode=255 Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.320309 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c"} Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.320378 4996 scope.go:117] "RemoveContainer" containerID="96e37f8a848ba48922362fdcb3d955adb6f118dbdfbddae9dc02979027952365" Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.320565 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.321778 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.321838 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.321859 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.326477 4996 scope.go:117] "RemoveContainer" containerID="00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c" Feb 28 09:01:34 crc kubenswrapper[4996]: E0228 09:01:34.326761 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.595170 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:01:34 crc kubenswrapper[4996]: I0228 09:01:34.962997 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:35 crc kubenswrapper[4996]: I0228 09:01:35.326513 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 09:01:35 crc kubenswrapper[4996]: I0228 09:01:35.330355 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:35 crc kubenswrapper[4996]: I0228 09:01:35.331577 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:35 crc kubenswrapper[4996]: I0228 09:01:35.331646 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:35 crc kubenswrapper[4996]: I0228 09:01:35.331671 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:35 crc kubenswrapper[4996]: I0228 09:01:35.332720 4996 scope.go:117] "RemoveContainer" containerID="00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c" Feb 28 09:01:35 crc kubenswrapper[4996]: E0228 09:01:35.333120 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" 
with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:01:35 crc kubenswrapper[4996]: I0228 09:01:35.958188 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:36 crc kubenswrapper[4996]: I0228 09:01:36.962810 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:37 crc kubenswrapper[4996]: E0228 09:01:37.135095 4996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 09:01:37 crc kubenswrapper[4996]: I0228 09:01:37.964294 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:38 crc kubenswrapper[4996]: I0228 09:01:38.288257 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:01:38 crc kubenswrapper[4996]: I0228 09:01:38.288811 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:38 crc kubenswrapper[4996]: I0228 09:01:38.290835 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:38 crc kubenswrapper[4996]: I0228 09:01:38.290889 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:38 crc kubenswrapper[4996]: I0228 09:01:38.290909 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:38 crc kubenswrapper[4996]: I0228 09:01:38.291784 4996 scope.go:117] "RemoveContainer" containerID="00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c" Feb 28 09:01:38 crc kubenswrapper[4996]: E0228 09:01:38.292209 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:01:38 crc kubenswrapper[4996]: I0228 09:01:38.963057 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:39 crc kubenswrapper[4996]: I0228 09:01:39.961277 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:40 crc kubenswrapper[4996]: E0228 09:01:40.290584 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 09:01:40 crc kubenswrapper[4996]: I0228 09:01:40.303561 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 28 09:01:40 crc kubenswrapper[4996]: I0228 09:01:40.304737 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:40 crc kubenswrapper[4996]: I0228 09:01:40.304822 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:40 crc kubenswrapper[4996]: I0228 09:01:40.304846 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:40 crc kubenswrapper[4996]: I0228 09:01:40.304887 4996 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 09:01:40 crc kubenswrapper[4996]: E0228 09:01:40.309752 4996 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 09:01:40 crc kubenswrapper[4996]: I0228 09:01:40.963548 4996 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 09:01:41 crc kubenswrapper[4996]: I0228 09:01:41.549223 4996 csr.go:261] certificate signing request csr-q29rt is approved, waiting to be issued Feb 28 09:01:41 crc kubenswrapper[4996]: I0228 09:01:41.559186 4996 csr.go:257] certificate signing request csr-q29rt is issued Feb 28 09:01:41 crc kubenswrapper[4996]: I0228 09:01:41.628804 4996 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 28 09:01:41 crc kubenswrapper[4996]: I0228 09:01:41.801779 4996 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 28 09:01:42 crc kubenswrapper[4996]: I0228 09:01:42.032390 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 28 09:01:42 crc kubenswrapper[4996]: I0228 09:01:42.033989 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:42 crc kubenswrapper[4996]: I0228 09:01:42.034043 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:42 crc kubenswrapper[4996]: I0228 09:01:42.034054 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:42 crc kubenswrapper[4996]: I0228 09:01:42.560868 4996 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-26 18:08:10.693464327 +0000 UTC Feb 28 09:01:42 crc kubenswrapper[4996]: I0228 09:01:42.560919 4996 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6513h6m28.132549047s for next certificate rotation Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.136363 4996 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.310113 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.311463 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.311515 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.311530 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.311647 4996 kubelet_node_status.go:76] "Attempting to register 
node" node="crc" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.321046 4996 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.321424 4996 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.321519 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.325052 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.325080 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.325107 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.325123 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.325133 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:47Z","lastTransitionTime":"2026-02-28T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.337453 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.345055 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.345106 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.345139 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.345157 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.345171 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:47Z","lastTransitionTime":"2026-02-28T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.356670 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.364453 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.364639 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.364729 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.364813 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.364899 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:47Z","lastTransitionTime":"2026-02-28T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.377466 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.385814 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.385843 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.385854 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.385868 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:47 crc kubenswrapper[4996]: I0228 09:01:47.385879 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:47Z","lastTransitionTime":"2026-02-28T09:01:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.398738 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.398849 4996 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.398875 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.499463 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.600506 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.701463 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.801889 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:47 crc kubenswrapper[4996]: E0228 09:01:47.903300 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 09:01:48.004710 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 
09:01:48.105954 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 09:01:48.206911 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 09:01:48.307244 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 09:01:48.407929 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 09:01:48.508346 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 09:01:48.609028 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 09:01:48.709789 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 09:01:48.810442 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:48 crc kubenswrapper[4996]: E0228 09:01:48.910867 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.011799 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.112865 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.213873 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 
09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.314790 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.415789 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.516853 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.617362 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.718223 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.819364 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:49 crc kubenswrapper[4996]: E0228 09:01:49.920199 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.020738 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.121859 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.222689 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.323579 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.424706 4996 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.525872 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.626555 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.726893 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.827888 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:50 crc kubenswrapper[4996]: E0228 09:01:50.928922 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.029954 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.131033 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.231492 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.331884 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.432846 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.533370 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.633723 4996 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.734786 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.834981 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:51 crc kubenswrapper[4996]: E0228 09:01:51.936105 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc kubenswrapper[4996]: E0228 09:01:52.036227 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc kubenswrapper[4996]: E0228 09:01:52.137170 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc kubenswrapper[4996]: E0228 09:01:52.238162 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc kubenswrapper[4996]: E0228 09:01:52.338729 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc kubenswrapper[4996]: E0228 09:01:52.438912 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc kubenswrapper[4996]: E0228 09:01:52.539662 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc kubenswrapper[4996]: E0228 09:01:52.640444 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc kubenswrapper[4996]: E0228 09:01:52.741375 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc 
kubenswrapper[4996]: E0228 09:01:52.841659 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:52 crc kubenswrapper[4996]: E0228 09:01:52.942232 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:53 crc kubenswrapper[4996]: I0228 09:01:53.032635 4996 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 09:01:53 crc kubenswrapper[4996]: I0228 09:01:53.034503 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:53 crc kubenswrapper[4996]: I0228 09:01:53.034601 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:53 crc kubenswrapper[4996]: I0228 09:01:53.034620 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:53 crc kubenswrapper[4996]: I0228 09:01:53.035497 4996 scope.go:117] "RemoveContainer" containerID="00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.035786 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.043283 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.144519 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 
09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.244731 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.345397 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.446453 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.547542 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.648612 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.749672 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.850738 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:53 crc kubenswrapper[4996]: E0228 09:01:53.951288 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.051890 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.152053 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.253091 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.354265 4996 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.454459 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.554860 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.562674 4996 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.656043 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.756446 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.857178 4996 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.926871 4996 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.960283 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.960366 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.960385 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.960464 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.960491 4996 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:54Z","lastTransitionTime":"2026-02-28T09:01:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.978285 4996 apiserver.go:52] "Watching apiserver" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.985829 4996 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.986155 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.986614 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.986745 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.986817 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.987115 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.987375 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.987708 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.988089 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.988187 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:54 crc kubenswrapper[4996]: E0228 09:01:54.988268 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.988506 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.989525 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.989751 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.989814 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.990256 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.991316 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.992962 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.993281 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 28 09:01:54 crc kubenswrapper[4996]: I0228 09:01:54.993610 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.032716 4996 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.048751 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.062327 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.062990 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.063097 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.063124 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.063156 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.063180 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:55Z","lastTransitionTime":"2026-02-28T09:01:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.070322 4996 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.074884 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.087576 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.098737 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.108891 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115308 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115395 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115435 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115469 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115501 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115542 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115594 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115646 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 09:01:55 crc 
kubenswrapper[4996]: I0228 09:01:55.115693 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115744 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115924 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.115984 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116102 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116195 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116236 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116270 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116304 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116336 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116349 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116370 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116407 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116441 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116473 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116485 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116485 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116528 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116506 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116597 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116627 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116659 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116690 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116722 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116758 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116790 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116813 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116819 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116855 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116874 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116704 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.116968 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117000 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117116 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117139 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117157 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117159 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117198 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117215 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117221 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117271 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117329 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117356 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117379 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117403 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117429 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117453 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117479 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117504 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117527 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117549 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117604 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117628 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117649 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117670 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117693 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117714 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117736 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117757 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117805 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117828 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117849 
4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117900 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117924 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117950 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.117976 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118024 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118046 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118070 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118092 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118113 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118137 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118137 4996 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118159 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118233 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118280 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118317 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118350 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118446 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118484 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118521 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118638 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118681 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 
09:01:55.118754 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118750 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118838 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.119057 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.119126 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.119197 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.118806 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.119361 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.119466 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.119528 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.119583 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.119952 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120122 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120160 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120195 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120230 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120266 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120300 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120334 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120369 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120403 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120449 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120501 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120548 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120585 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120619 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120652 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120685 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120717 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120850 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120887 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120923 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.120991 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121052 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121086 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121118 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121150 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121183 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121215 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121248 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121287 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121321 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121353 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121384 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121416 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121446 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121477 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121513 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121547 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121581 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121613 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121644 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121676 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121713 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121746 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121782 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121819 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121896 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121931 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.121968 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122031 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122067 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122102 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122137 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122262 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122300 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122333 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122368 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122554 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122589 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122621 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122655 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122693 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122726 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122759 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122792 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122824 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122856 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122892 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122925 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.119610 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.122959 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.123107 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.123248 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.123329 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.123658 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.123763 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.123787 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.124075 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.124153 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.124214 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.124275 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.124334 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.124401 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.124622 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.124683 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.126074 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.126995 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: E0228 09:01:55.127447 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:01:55.627417553 +0000 UTC m=+79.318220374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.127525 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.127640 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.128109 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.128572 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.129154 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.129340 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.129412 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.128534 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.129958 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.130450 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.129538 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.131262 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.131749 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.131761 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.131952 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.132052 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.132259 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.132451 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.132622 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.132826 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.132881 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.133088 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.133245 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.133470 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.133502 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.133506 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.133524 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.133818 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.133960 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.133957 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.134054 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.134256 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.134332 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.134360 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.134530 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.134655 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.134823 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135025 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135039 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135142 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135181 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135564 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135717 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135767 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135782 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135808 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.135822 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.136164 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.136187 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.136500 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.136486 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.136645 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.136750 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.136687 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.136828 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.136967 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.137119 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.137270 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.137533 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.137550 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.137905 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.138338 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.139063 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.139067 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.139293 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.139486 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.140306 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.140462 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.140528 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.140627 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.141916 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.141048 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.141359 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.141408 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.141453 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.141480 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.141786 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.142147 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.142281 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.142488 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.142471 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.142486 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.142540 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.142849 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.142898 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.143210 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.143211 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.143363 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.144023 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.144213 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.144300 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.144381 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.144549 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.144570 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.143886 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.145095 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.145209 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.145232 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.145505 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.145620 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.145658 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.145632 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.146104 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.147174 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.147293 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.147398 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.147420 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.150099 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.150489 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.150502 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.150539 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.150604 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.150807 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.151059 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.151131 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.151160 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.151242 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.151502 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.151567 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.151843 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.154395 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:55 crc kubenswrapper[4996]: I0228 09:01:55.154410 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.127189 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.127449 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:55.144487 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:55.124739 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.128080 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.131471 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.131657 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.131806 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.131982 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.132198 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.132382 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.132538 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.132675 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.132825 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133042 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133264 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133457 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133637 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133827 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.134050 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.134260 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.134417 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.134562 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.134763 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.134919 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 09:01:56 
crc kubenswrapper[4996]: I0228 09:01:56.135122 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.135321 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.135477 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.135669 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.135812 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.135945 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.136127 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.136305 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.136471 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.136603 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.136789 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.136961 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.137160 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.137321 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.137465 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.137609 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.137779 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.137920 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.138126 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.138296 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.138510 
4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.138697 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.138861 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139049 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139335 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139489 4996 reconciler_common.go:293] "Volume 
detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139616 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139737 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139865 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139990 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140193 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140376 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140499 4996 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" 
DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140622 4996 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140743 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140872 4996 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140999 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141207 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141355 4996 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141501 4996 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141662 4996 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141812 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141975 4996 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.142212 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.142375 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.142539 4996 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.142676 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.142798 
4996 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.143547 4996 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.143708 4996 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.143970 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144182 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144361 4996 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144493 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144607 4996 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144728 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.143441 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144834 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144845 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144861 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144872 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:56Z","lastTransitionTime":"2026-02-28T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144908 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.143440 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.132608 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144966 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.143926 4996 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144757 4996 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.146268 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.146386 4996 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.147301 4996 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.147489 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.147619 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.148519 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.148728 4996 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.148901 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.149074 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150458 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150499 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150515 4996 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150529 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150545 4996 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150557 4996 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150570 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150583 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150595 4996 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150608 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150621 4996 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on 
node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150633 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150645 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150659 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150672 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150686 4996 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150700 4996 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150711 4996 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc 
kubenswrapper[4996]: I0228 09:01:56.150723 4996 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150735 4996 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150747 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150758 4996 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150772 4996 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150784 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150797 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150810 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150823 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150836 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150847 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150859 4996 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150871 4996 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150883 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150895 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: 
I0228 09:01:56.150908 4996 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150920 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150932 4996 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150944 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150956 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150969 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150981 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.150994 4996 reconciler_common.go:293] "Volume detached for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151123 4996 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151140 4996 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151153 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151165 4996 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151179 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151193 4996 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151205 4996 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node 
\"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151217 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151228 4996 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151242 4996 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151256 4996 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151268 4996 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151279 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151292 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 
09:01:56.151304 4996 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151317 4996 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151330 4996 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151341 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151354 4996 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151365 4996 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151378 4996 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151389 4996 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151401 4996 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151413 4996 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151425 4996 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151436 4996 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151447 4996 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151459 4996 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151471 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 28 
09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151483 4996 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151495 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151506 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151518 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151530 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151541 4996 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151554 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151566 4996 reconciler_common.go:293] "Volume detached for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151580 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151592 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151604 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151616 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151628 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151639 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151652 4996 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151663 4996 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151675 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151687 4996 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151700 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151713 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151725 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151739 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151753 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151765 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151776 4996 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151788 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151800 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151814 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151825 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: 
\"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151837 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151849 4996 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151861 4996 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151873 4996 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151888 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151903 4996 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151922 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on 
node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151937 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151948 4996 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151959 4996 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151971 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.147646 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.155855 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.155883 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.155904 4996 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.155977 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:56.65595341 +0000 UTC m=+80.346756281 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.160101 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144461 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.132875 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.161985 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133237 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133356 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133556 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133635 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.133723 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.138851 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.138933 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139215 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139428 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139862 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.139901 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140108 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140464 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140530 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140736 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.140771 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141025 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141079 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141334 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141655 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.141977 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.142146 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.142291 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.143034 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.143495 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.143577 4996 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.143624 4996 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.143967 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144234 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144464 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144657 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144870 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.144911 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.147927 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.151150 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.162101 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.162233 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.162238 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:56.662204998 +0000 UTC m=+80.353007899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.162264 4996 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.162264 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.162281 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:56.662257009 +0000 UTC m=+80.353059840 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.162316 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:56.66230384 +0000 UTC m=+80.353106741 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.164915 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.165456 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.166196 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.166795 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.166853 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.168228 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.174462 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.224514 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.247805 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.248203 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.248245 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.248267 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.248282 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:56Z","lastTransitionTime":"2026-02-28T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253235 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253348 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253402 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.253422 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:01:57.25339856 +0000 UTC m=+80.944201391 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253453 4996 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253470 4996 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253482 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253488 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253495 4996 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc 
kubenswrapper[4996]: I0228 09:01:56.253509 4996 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253522 4996 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253534 4996 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253546 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253558 4996 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253543 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253571 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 
crc kubenswrapper[4996]: I0228 09:01:56.253651 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253673 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253695 4996 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253713 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253733 4996 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253753 4996 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253772 4996 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253789 4996 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253805 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253822 4996 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253839 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253855 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253872 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253890 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253906 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node 
\"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253922 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253943 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253962 4996 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253980 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.253996 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254046 4996 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254070 4996 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 
09:01:56.254093 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254115 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254138 4996 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254159 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254180 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254206 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254229 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 
09:01:56.254253 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254277 4996 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254298 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.254319 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.351936 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.351978 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.351989 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.352026 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.352040 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:56Z","lastTransitionTime":"2026-02-28T09:01:56Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.455798 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.455873 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.455897 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.455925 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.455947 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:56Z","lastTransitionTime":"2026-02-28T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.512116 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 09:01:56 crc kubenswrapper[4996]: W0228 09:01:56.530683 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-48660cae354b5d819207ec8da45e8f7f49ceda7d42351d2120abe2f6111ab001 WatchSource:0}: Error finding container 48660cae354b5d819207ec8da45e8f7f49ceda7d42351d2120abe2f6111ab001: Status 404 returned error can't find the container with id 48660cae354b5d819207ec8da45e8f7f49ceda7d42351d2120abe2f6111ab001 Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.534178 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 09:01:56 crc kubenswrapper[4996]: W0228 09:01:56.553678 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-3af4f065d79e2021d08ad791ab9bfa23c821a2eb5282a31e10da96230721d683 WatchSource:0}: Error finding container 3af4f065d79e2021d08ad791ab9bfa23c821a2eb5282a31e10da96230721d683: Status 404 returned error can't find the container with id 3af4f065d79e2021d08ad791ab9bfa23c821a2eb5282a31e10da96230721d683 Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.558617 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.558671 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.558691 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.558714 4996 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.558731 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:56Z","lastTransitionTime":"2026-02-28T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.658095 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.658351 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.658404 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.658424 4996 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.658528 4996 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:57.658490967 +0000 UTC m=+81.349293808 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.661097 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.661296 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.661324 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.661396 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.661434 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:56Z","lastTransitionTime":"2026-02-28T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.758633 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.758687 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.758718 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.758824 4996 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.758841 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.758850 4996 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.758868 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.758882 4996 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.758927 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:57.758890079 +0000 UTC m=+81.449692900 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.758948 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:57.75893893 +0000 UTC m=+81.449741751 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:01:56 crc kubenswrapper[4996]: E0228 09:01:56.758961 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:57.75895461 +0000 UTC m=+81.449757431 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.765742 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.765807 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.765821 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.765838 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.765850 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:56Z","lastTransitionTime":"2026-02-28T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.869142 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.869216 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.869241 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.869269 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.869288 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:56Z","lastTransitionTime":"2026-02-28T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.971959 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.972001 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.972026 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.972039 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:56 crc kubenswrapper[4996]: I0228 09:01:56.972048 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:56Z","lastTransitionTime":"2026-02-28T09:01:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.032135 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.032252 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.032441 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.032495 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.032651 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.032801 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.039468 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.041372 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.044862 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.046536 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.049592 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.052195 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.052249 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.053847 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.055812 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.057066 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.058658 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.059520 
4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.061263 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.061986 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.062881 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.064410 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.065199 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.066592 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.067184 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.068033 
4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.069595 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.069713 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.070361 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.071615 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.072317 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 
09:01:57.073721 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.074379 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.074912 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.074981 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.075031 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.075063 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.075082 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.075170 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.076564 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.077371 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.078957 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.079666 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.081224 4996 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.081379 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.085281 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.086454 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.087080 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.088829 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.089675 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.090794 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.090908 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" 
path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.091598 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.092749 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.093250 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.094335 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.094935 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.095896 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.096414 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.097338 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.098318 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.099223 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.100130 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.100660 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.101500 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.102097 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.102620 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.103434 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" 
path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.109701 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.128436 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.150200 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.161630 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3af4f065d79e2021d08ad791ab9bfa23c821a2eb5282a31e10da96230721d683"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.163064 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.163099 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"48660cae354b5d819207ec8da45e8f7f49ceda7d42351d2120abe2f6111ab001"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.164439 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.164471 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.164486 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"01c9d82611680199e8c6efaafc0a9369a4f8fa46fc5caa5d78f6214f5ceb6c2c"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.177919 4996 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.178124 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.178151 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.178162 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.178179 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.178192 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.195079 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.208399 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.225257 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.240262 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.257450 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.263740 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.263950 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:01:59.263920207 +0000 UTC m=+82.954723058 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.273127 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.280530 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.280562 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.280573 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.280588 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.280600 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.288842 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.302596 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.320904 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.336619 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.349538 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.383122 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.383191 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.383211 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.383236 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.383254 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.486152 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.486195 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.486210 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.486254 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.486271 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.589538 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.589594 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.589619 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.589651 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.589673 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.624814 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.624893 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.624916 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.624949 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.624971 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.651164 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.656460 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.656513 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.656531 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.656558 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.656577 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.667272 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.667473 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.667507 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.667527 4996 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.667601 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:59.66757819 +0000 UTC m=+83.358381031 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.681421 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.687671 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.687772 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.687828 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.687852 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.687869 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.708937 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.714340 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.714453 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.714514 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.714539 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.714558 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.736547 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.742619 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.742686 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.742703 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.742727 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.742745 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.766537 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:01:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.766700 4996 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.767634 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.767680 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.767706 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.767791 4996 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.767849 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:59.767833609 +0000 UTC m=+83.458636440 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.767866 4996 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.767962 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:59.767936101 +0000 UTC m=+83.458738952 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.767872 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.768032 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.768054 4996 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:57 crc kubenswrapper[4996]: E0228 09:01:57.768101 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 09:01:59.768087944 +0000 UTC m=+83.458890785 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.768674 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.768732 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.768751 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.768772 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.768790 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.872272 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.872346 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.872364 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.872389 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.872409 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.975267 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.975333 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.975350 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.975380 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:57 crc kubenswrapper[4996]: I0228 09:01:57.975399 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:57Z","lastTransitionTime":"2026-02-28T09:01:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.078138 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.078662 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.078676 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.078698 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.078711 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:58Z","lastTransitionTime":"2026-02-28T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.180991 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.181110 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.181130 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.181157 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.181175 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:58Z","lastTransitionTime":"2026-02-28T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.284521 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.284578 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.284589 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.284609 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.284623 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:58Z","lastTransitionTime":"2026-02-28T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.387961 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.388041 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.388057 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.388085 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.388103 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:58Z","lastTransitionTime":"2026-02-28T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.491686 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.491745 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.491755 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.491779 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.491792 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:58Z","lastTransitionTime":"2026-02-28T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.594436 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.594525 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.594551 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.594581 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.594602 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:58Z","lastTransitionTime":"2026-02-28T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.697530 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.697581 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.697599 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.697622 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.697640 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:58Z","lastTransitionTime":"2026-02-28T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.799994 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.800076 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.800095 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.800124 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.800142 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:58Z","lastTransitionTime":"2026-02-28T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.902700 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.902769 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.902788 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.902836 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:58 crc kubenswrapper[4996]: I0228 09:01:58.902855 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:58Z","lastTransitionTime":"2026-02-28T09:01:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.006132 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.006203 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.006225 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.006254 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.006276 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.032843 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.032936 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.032875 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.033125 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.033248 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.033465 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.109105 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.109235 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.109255 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.109281 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.109303 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.211934 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.211993 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.212014 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.212029 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.212039 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.283121 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.283340 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 09:02:03.283307292 +0000 UTC m=+86.974110143 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.314806 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.314854 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.314866 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.314883 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.314897 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.417189 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.417262 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.417279 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.417303 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.417320 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.519856 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.519901 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.519915 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.519931 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.519969 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.623062 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.623101 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.623111 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.623127 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.623138 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.687449 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.687717 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.687775 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.687799 4996 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.687899 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:03.687870107 +0000 UTC m=+87.378673088 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.726073 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.726800 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.726831 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.726853 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.726866 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.788252 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.788333 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.788385 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.788465 4996 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.788555 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:03.788536694 +0000 UTC m=+87.479339505 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.788556 4996 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.788567 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.788718 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.788751 4996 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.788646 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:03.788623606 +0000 UTC m=+87.479426447 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:01:59 crc kubenswrapper[4996]: E0228 09:01:59.788847 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:03.788820691 +0000 UTC m=+87.479623602 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.829473 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.829532 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.829550 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.829572 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.829590 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.932022 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.932094 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.932110 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.932136 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:01:59 crc kubenswrapper[4996]: I0228 09:01:59.932154 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:01:59Z","lastTransitionTime":"2026-02-28T09:01:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.034964 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.035057 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.035077 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.035099 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.035117 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.137613 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.137702 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.137729 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.137781 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.137824 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.240521 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.240585 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.240604 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.240627 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.240645 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.343348 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.343469 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.343504 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.343540 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.343565 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.446923 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.446982 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.446992 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.447044 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.447057 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.550174 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.550239 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.550256 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.550282 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.550300 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.653524 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.653600 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.653627 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.653657 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.653679 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.756334 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.756367 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.756375 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.756387 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.756400 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.859057 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.859106 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.859118 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.859137 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.859162 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.962123 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.962197 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.962227 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.962259 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:00 crc kubenswrapper[4996]: I0228 09:02:00.962281 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:00Z","lastTransitionTime":"2026-02-28T09:02:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.032856 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.032889 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:01 crc kubenswrapper[4996]: E0228 09:02:01.033111 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.033136 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:01 crc kubenswrapper[4996]: E0228 09:02:01.033180 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:01 crc kubenswrapper[4996]: E0228 09:02:01.033346 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.065277 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.065339 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.065355 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.065380 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.065398 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.168677 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.168751 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.168772 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.168799 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.168821 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.177573 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.203144 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.225873 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:01Z is after 2025-08-24T17:21:41Z" Feb 28 
09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.242652 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.267659 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.271318 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.271349 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.271360 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.271372 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.271382 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.285124 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.303122 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.374459 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.374511 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.374533 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.374558 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.374577 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.477838 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.477946 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.477976 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.478047 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.478074 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.580884 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.580953 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.580970 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.580994 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.581051 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.683639 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.683697 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.683714 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.683738 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.683756 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.743192 4996 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.787489 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.787538 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.787555 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.787580 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.787598 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.890292 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.890366 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.890389 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.890418 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.890440 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.992606 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.992666 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.992683 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.992704 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:01 crc kubenswrapper[4996]: I0228 09:02:01.992721 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:01Z","lastTransitionTime":"2026-02-28T09:02:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.095515 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.095582 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.095600 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.095625 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.095644 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:02Z","lastTransitionTime":"2026-02-28T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.197926 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.197964 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.197981 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.197996 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.198028 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:02Z","lastTransitionTime":"2026-02-28T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.301095 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.301169 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.301192 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.301224 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.301249 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:02Z","lastTransitionTime":"2026-02-28T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.404288 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.404344 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.404360 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.404381 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.404399 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:02Z","lastTransitionTime":"2026-02-28T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.507750 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.507798 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.507809 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.507825 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.507838 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:02Z","lastTransitionTime":"2026-02-28T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.610527 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.610583 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.610601 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.610624 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.610643 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:02Z","lastTransitionTime":"2026-02-28T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.713844 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.713883 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.713899 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.713922 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.713938 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:02Z","lastTransitionTime":"2026-02-28T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.817886 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.817960 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.817980 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.818058 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.818076 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:02Z","lastTransitionTime":"2026-02-28T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.920965 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.921021 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.921034 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.921050 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:02 crc kubenswrapper[4996]: I0228 09:02:02.921061 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:02Z","lastTransitionTime":"2026-02-28T09:02:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.024180 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.024244 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.024263 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.024291 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.024315 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.033190 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.033309 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.033348 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.033387 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.033476 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.033635 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.127752 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.127876 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.127906 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.127936 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.127958 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.231186 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.231237 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.231254 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.231276 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.231294 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.334745 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.334791 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.334808 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.334834 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.334851 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.355395 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.355613 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 09:02:11.355589729 +0000 UTC m=+95.046392570 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.438217 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.438256 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.438265 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.438280 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.438289 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.541735 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.541799 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.541818 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.541852 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.541876 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.645302 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.645340 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.645351 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.645366 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.645380 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.748337 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.748414 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.748424 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.748439 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.748448 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.760049 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.760223 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.760256 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.760275 4996 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.760353 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:11.760330278 +0000 UTC m=+95.451133129 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.851831 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.851931 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.851949 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.851977 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.851996 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.861290 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.861340 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.861375 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.861490 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.861505 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.861515 4996 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.861510 4996 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.861572 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:11.861558879 +0000 UTC m=+95.552361690 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.861631 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:11.86159869 +0000 UTC m=+95.552401561 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.861510 4996 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:02:03 crc kubenswrapper[4996]: E0228 09:02:03.861709 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:11.861690432 +0000 UTC m=+95.552493393 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.955624 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.955684 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.955869 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:03 crc kubenswrapper[4996]: I0228 09:02:03.955904 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:03 crc 
kubenswrapper[4996]: I0228 09:02:03.955928 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:03Z","lastTransitionTime":"2026-02-28T09:02:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.062218 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.062311 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.062335 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.062367 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.062393 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.165472 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.165530 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.165547 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.165571 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.165591 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.268132 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.268196 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.268214 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.268246 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.268269 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.371558 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.371628 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.371647 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.371671 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.371689 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.475283 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.475360 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.475382 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.475408 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.475428 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.578730 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.578798 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.578818 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.578842 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.578859 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.681977 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.682098 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.682119 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.682155 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.682176 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.787998 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.788159 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.788188 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.788264 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.788293 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.891417 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.891486 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.891510 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.891544 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.891565 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.994877 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.994948 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.994966 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.994990 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:04 crc kubenswrapper[4996]: I0228 09:02:04.995036 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:04Z","lastTransitionTime":"2026-02-28T09:02:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.032517 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.032530 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.032742 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:05 crc kubenswrapper[4996]: E0228 09:02:05.032844 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:05 crc kubenswrapper[4996]: E0228 09:02:05.033038 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:05 crc kubenswrapper[4996]: E0228 09:02:05.033180 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.097864 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.097913 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.097925 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.097941 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.097954 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:05Z","lastTransitionTime":"2026-02-28T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.200959 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.201028 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.201043 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.201068 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.201084 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:05Z","lastTransitionTime":"2026-02-28T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.303856 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.303910 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.303918 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.303935 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.303950 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:05Z","lastTransitionTime":"2026-02-28T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.406209 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.407129 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.407162 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.407190 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.407289 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:05Z","lastTransitionTime":"2026-02-28T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.509967 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.510053 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.510073 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.510097 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.510120 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:05Z","lastTransitionTime":"2026-02-28T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.612716 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.612769 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.612783 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.612797 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.612806 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:05Z","lastTransitionTime":"2026-02-28T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.716142 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.716198 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.716209 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.716225 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.716235 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:05Z","lastTransitionTime":"2026-02-28T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.819894 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.819942 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.819954 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.819973 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.819986 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:05Z","lastTransitionTime":"2026-02-28T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.923481 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.923537 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.923546 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.923563 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:05 crc kubenswrapper[4996]: I0228 09:02:05.923575 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:05Z","lastTransitionTime":"2026-02-28T09:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.026250 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.026340 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.026377 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.026413 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.026437 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.129222 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.129286 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.129304 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.129331 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.129349 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.232886 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.232958 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.232977 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.233001 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.233062 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.337425 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.337560 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.337586 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.337618 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.337636 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.440389 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.440466 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.440490 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.440523 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.440550 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.543606 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.543663 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.543683 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.543715 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.543734 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.646029 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.646065 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.646078 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.646095 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.646110 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.749275 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.749317 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.749330 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.749347 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.749362 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.852657 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.852742 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.852766 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.852810 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.852837 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.955919 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.955991 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.956048 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.956082 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:06 crc kubenswrapper[4996]: I0228 09:02:06.956103 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:06Z","lastTransitionTime":"2026-02-28T09:02:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.032646 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.032655 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.032842 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:07 crc kubenswrapper[4996]: E0228 09:02:07.032910 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:07 crc kubenswrapper[4996]: E0228 09:02:07.033205 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:07 crc kubenswrapper[4996]: E0228 09:02:07.033472 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.049311 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:07Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.052942 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.054245 4996 scope.go:117] "RemoveContainer" containerID="00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c" Feb 28 09:02:07 crc kubenswrapper[4996]: E0228 09:02:07.054485 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.059994 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.060095 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.060116 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.060146 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.060168 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.071416 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:07Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.092661 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:07Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.112725 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:07Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.126704 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:07Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.140262 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:07Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.162823 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.162871 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.162884 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.162906 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.162921 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.196060 4996 scope.go:117] "RemoveContainer" containerID="00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c" Feb 28 09:02:07 crc kubenswrapper[4996]: E0228 09:02:07.196333 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.266067 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.266135 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.266152 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.266172 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.266186 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.368440 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.368528 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.368554 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.368589 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.368617 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.471892 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.471961 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.471982 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.472044 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.472064 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.574334 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.574381 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.574398 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.574422 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.574439 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.678497 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.678569 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.678596 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.678630 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.678650 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.782149 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.782193 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.782206 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.782222 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.782234 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.885396 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.885454 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.885477 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.885507 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.885525 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.999730 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.999860 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.999888 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:07 crc kubenswrapper[4996]: I0228 09:02:07.999917 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:07.999939 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:07Z","lastTransitionTime":"2026-02-28T09:02:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.082259 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.082338 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.082349 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.082369 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.082379 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: E0228 09:02:08.107087 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:08Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.112092 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.112199 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.112225 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.112253 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.112273 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: E0228 09:02:08.135196 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:08Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.140381 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.140448 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.140467 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.140493 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.140512 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: E0228 09:02:08.162077 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:08Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.167148 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.167187 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.167198 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.167214 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.167224 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: E0228 09:02:08.188727 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:08Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.193437 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.193492 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.193510 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.193545 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.193563 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: E0228 09:02:08.215059 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:08Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:08 crc kubenswrapper[4996]: E0228 09:02:08.215227 4996 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.217703 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.217820 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.217845 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.217869 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.217887 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.320224 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.320259 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.320275 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.320297 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.320310 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.423670 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.423742 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.423765 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.423794 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.423817 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.526951 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.526997 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.527025 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.527039 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.527049 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.630342 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.630421 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.630447 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.630480 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.630502 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.732799 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.732858 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.732875 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.732900 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.732923 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.836149 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.836202 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.836347 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.836383 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.836403 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.939810 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.939864 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.939880 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.939904 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:08 crc kubenswrapper[4996]: I0228 09:02:08.939925 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:08Z","lastTransitionTime":"2026-02-28T09:02:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.033147 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.033234 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.033153 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:09 crc kubenswrapper[4996]: E0228 09:02:09.033350 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:09 crc kubenswrapper[4996]: E0228 09:02:09.033475 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:09 crc kubenswrapper[4996]: E0228 09:02:09.033653 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.043505 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.043545 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.043553 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.043568 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.043577 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.146336 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.146394 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.146412 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.146434 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.146452 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.249497 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.249543 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.249554 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.249568 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.249585 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.352160 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.352312 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.352334 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.352357 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.352375 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.454772 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.454829 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.454845 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.454949 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.455046 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.557058 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.557098 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.557109 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.557125 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.557137 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.660156 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.660209 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.660220 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.660239 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.660258 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.763143 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.763207 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.763220 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.763240 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.763257 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.866767 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.866843 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.866867 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.866894 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.866915 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.970286 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.970367 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.970388 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.970416 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:09 crc kubenswrapper[4996]: I0228 09:02:09.970433 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:09Z","lastTransitionTime":"2026-02-28T09:02:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.054774 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.072658 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.072701 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.072718 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.072741 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.072758 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:10Z","lastTransitionTime":"2026-02-28T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.175378 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.175537 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.175569 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.175599 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.175623 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:10Z","lastTransitionTime":"2026-02-28T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.278355 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.278398 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.278407 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.278422 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.278432 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:10Z","lastTransitionTime":"2026-02-28T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.381505 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.381594 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.381613 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.381639 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.381658 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:10Z","lastTransitionTime":"2026-02-28T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.484154 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.484221 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.484242 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.484304 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.484353 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:10Z","lastTransitionTime":"2026-02-28T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.587349 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.587398 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.587418 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.587447 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.587468 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:10Z","lastTransitionTime":"2026-02-28T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.690135 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.690201 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.690220 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.690245 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.690263 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:10Z","lastTransitionTime":"2026-02-28T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.774508 4996 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.794342 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.794379 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.794389 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.794402 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.794412 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:10Z","lastTransitionTime":"2026-02-28T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.896661 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.896698 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.896726 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.896751 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:10 crc kubenswrapper[4996]: I0228 09:02:10.896768 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:10Z","lastTransitionTime":"2026-02-28T09:02:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:10.999991 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.000094 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.000120 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.000152 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.000177 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.032830 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.032832 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.033083 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.033099 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.032846 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.033163 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.103901 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.103970 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.103992 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.104053 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.104072 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.209736 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.209789 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.209801 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.209820 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.209832 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.312270 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.312315 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.312325 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.312340 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.312350 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.415473 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.415515 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.415526 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.415543 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.415555 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.445819 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.445982 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 09:02:27.445957782 +0000 UTC m=+111.136760603 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.508381 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vsncw"] Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.508996 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vsncw" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.512168 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.512452 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.512681 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.524863 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.524920 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.524945 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.524970 
4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.524988 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.538423 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.546434 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4c160bed-3a16-439b-b4b7-130d2cba6252-hosts-file\") pod \"node-resolver-vsncw\" (UID: \"4c160bed-3a16-439b-b4b7-130d2cba6252\") " pod="openshift-dns/node-resolver-vsncw" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.546497 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5kpj\" (UniqueName: \"kubernetes.io/projected/4c160bed-3a16-439b-b4b7-130d2cba6252-kube-api-access-x5kpj\") pod \"node-resolver-vsncw\" (UID: \"4c160bed-3a16-439b-b4b7-130d2cba6252\") " pod="openshift-dns/node-resolver-vsncw" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.560104 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.579134 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.598838 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.620482 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.631938 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.631984 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.632001 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.632044 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.632063 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.647508 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4c160bed-3a16-439b-b4b7-130d2cba6252-hosts-file\") pod \"node-resolver-vsncw\" (UID: \"4c160bed-3a16-439b-b4b7-130d2cba6252\") " pod="openshift-dns/node-resolver-vsncw" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.647557 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5kpj\" (UniqueName: \"kubernetes.io/projected/4c160bed-3a16-439b-b4b7-130d2cba6252-kube-api-access-x5kpj\") pod \"node-resolver-vsncw\" (UID: \"4c160bed-3a16-439b-b4b7-130d2cba6252\") " pod="openshift-dns/node-resolver-vsncw" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.647879 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4c160bed-3a16-439b-b4b7-130d2cba6252-hosts-file\") pod \"node-resolver-vsncw\" (UID: \"4c160bed-3a16-439b-b4b7-130d2cba6252\") " pod="openshift-dns/node-resolver-vsncw" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.654286 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.672786 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.680737 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5kpj\" (UniqueName: \"kubernetes.io/projected/4c160bed-3a16-439b-b4b7-130d2cba6252-kube-api-access-x5kpj\") pod \"node-resolver-vsncw\" (UID: \"4c160bed-3a16-439b-b4b7-130d2cba6252\") " pod="openshift-dns/node-resolver-vsncw" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.695810 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.714849 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.735281 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.735325 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.735341 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.735363 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.735380 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.838327 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.838379 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.838396 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.838419 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.838441 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.840229 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vsncw" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.850232 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.850355 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.850376 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.850389 4996 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.850439 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:27.850424335 +0000 UTC m=+111.541227156 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:11 crc kubenswrapper[4996]: W0228 09:02:11.853792 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c160bed_3a16_439b_b4b7_130d2cba6252.slice/crio-77f95776f7c9ed7b4809dc8fca03301ab31eafa258a803c11858bca5f682fb30 WatchSource:0}: Error finding container 77f95776f7c9ed7b4809dc8fca03301ab31eafa258a803c11858bca5f682fb30: Status 404 returned error can't find the container with id 77f95776f7c9ed7b4809dc8fca03301ab31eafa258a803c11858bca5f682fb30 Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.897403 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-snglm"] Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.898584 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-ddgnd"] Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.899039 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.899759 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:11 crc kubenswrapper[4996]: W0228 09:02:11.900435 4996 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.900473 4996 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 28 09:02:11 crc kubenswrapper[4996]: W0228 09:02:11.900901 4996 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.900926 4996 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.901300 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 28 09:02:11 crc kubenswrapper[4996]: 
I0228 09:02:11.903164 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.903424 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.904927 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.905374 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.905949 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jg4sj"] Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.906590 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.908965 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.909259 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.909489 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.909633 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.909807 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.924570 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.940577 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.944573 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.944599 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.944609 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.944622 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.944631 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:11Z","lastTransitionTime":"2026-02-28T09:02:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.950621 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-conf-dir\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.950954 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a98c14ee-40d6-4e30-9390-154743a75c63-rootfs\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.951212 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-daemon-config\") pod \"multus-snglm\" (UID: 
\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.951375 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-socket-dir-parent\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.951522 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-run-netns\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.951642 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-cnibin\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.951782 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.951916 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ddgnd\" 
(UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.952089 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-var-lib-kubelet\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.952225 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-cni-binary-copy\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.952368 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-hostroot\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.952492 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-os-release\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.952622 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-var-lib-cni-multus\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.952750 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkcp\" (UniqueName: \"kubernetes.io/projected/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-kube-api-access-zwkcp\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.952903 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.953163 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-cni-dir\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.953294 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6ed5a0c7-4cae-4140-be04-b7a0f3899920-cni-binary-copy\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.953435 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-etc-kubernetes\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.954137 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-os-release\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.954275 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl8vk\" (UniqueName: \"kubernetes.io/projected/6ed5a0c7-4cae-4140-be04-b7a0f3899920-kube-api-access-cl8vk\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.954409 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfhll\" (UniqueName: \"kubernetes.io/projected/a98c14ee-40d6-4e30-9390-154743a75c63-kube-api-access-jfhll\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.954542 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a98c14ee-40d6-4e30-9390-154743a75c63-mcd-auth-proxy-config\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.954676 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-system-cni-dir\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.954809 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-cnibin\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.954960 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.953118 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.955146 4996 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.954878 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.955213 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.955283 4996 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.955291 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a98c14ee-40d6-4e30-9390-154743a75c63-proxy-tls\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.955387 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:27.955298123 +0000 UTC m=+111.646100954 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.956092 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:27.956069491 +0000 UTC m=+111.646872462 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.956116 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-var-lib-cni-bin\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.956138 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-system-cni-dir\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.956157 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-run-k8s-cni-cncf-io\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.956173 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-run-multus-certs\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.955154 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.956216 4996 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:11 crc kubenswrapper[4996]: E0228 09:02:11.956248 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:27.956238825 +0000 UTC m=+111.647041856 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.967929 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.982567 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:11 crc kubenswrapper[4996]: I0228 09:02:11.996858 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:11Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.008192 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.019475 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.031719 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.047080 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.047384 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.047484 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.047625 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.047736 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.051797 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.057212 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-cni-binary-copy\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.058461 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-cni-binary-copy\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.058692 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-hostroot\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.058844 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-os-release\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.058887 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-var-lib-cni-multus\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.058789 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-hostroot\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.058996 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-os-release\") pod \"multus-additional-cni-plugins-ddgnd\" 
(UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.059116 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkcp\" (UniqueName: \"kubernetes.io/projected/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-kube-api-access-zwkcp\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.059113 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-var-lib-cni-multus\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.059215 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-cni-dir\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.059321 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6ed5a0c7-4cae-4140-be04-b7a0f3899920-cni-binary-copy\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.060478 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-etc-kubernetes\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " 
pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.060639 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-os-release\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.060398 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6ed5a0c7-4cae-4140-be04-b7a0f3899920-cni-binary-copy\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.060562 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-etc-kubernetes\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.060764 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl8vk\" (UniqueName: \"kubernetes.io/projected/6ed5a0c7-4cae-4140-be04-b7a0f3899920-kube-api-access-cl8vk\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.060779 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-os-release\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.059411 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-cni-dir\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.060854 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfhll\" (UniqueName: \"kubernetes.io/projected/a98c14ee-40d6-4e30-9390-154743a75c63-kube-api-access-jfhll\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.060893 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a98c14ee-40d6-4e30-9390-154743a75c63-mcd-auth-proxy-config\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.060961 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-system-cni-dir\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.061147 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-system-cni-dir\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.061352 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-cnibin\") pod 
\"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.061437 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-cnibin\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.061543 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a98c14ee-40d6-4e30-9390-154743a75c63-proxy-tls\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.061729 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-var-lib-cni-bin\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062503 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-var-lib-cni-bin\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062571 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-system-cni-dir\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: 
\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062614 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-run-k8s-cni-cncf-io\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062657 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-run-multus-certs\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062694 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-conf-dir\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062704 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-system-cni-dir\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062728 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a98c14ee-40d6-4e30-9390-154743a75c63-rootfs\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062759 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-daemon-config\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062791 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-run-k8s-cni-cncf-io\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062791 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-socket-dir-parent\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062843 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-run-netns\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062875 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-cnibin\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062900 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062929 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062943 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-socket-dir-parent\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062968 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-var-lib-kubelet\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.063056 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-var-lib-kubelet\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.063061 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-conf-dir\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.063095 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a98c14ee-40d6-4e30-9390-154743a75c63-rootfs\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.063112 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-run-netns\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.063169 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-cnibin\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.062761 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6ed5a0c7-4cae-4140-be04-b7a0f3899920-host-run-multus-certs\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.063876 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.064185 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.072788 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a98c14ee-40d6-4e30-9390-154743a75c63-mcd-auth-proxy-config\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.075126 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a98c14ee-40d6-4e30-9390-154743a75c63-proxy-tls\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.075470 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.076233 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfhll\" (UniqueName: \"kubernetes.io/projected/a98c14ee-40d6-4e30-9390-154743a75c63-kube-api-access-jfhll\") pod \"machine-config-daemon-jg4sj\" (UID: \"a98c14ee-40d6-4e30-9390-154743a75c63\") " pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.095369 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.116020 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.133883 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.149529 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.149573 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.149582 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.149599 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.149611 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.156550 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.177040 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.194955 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.213310 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.217894 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vsncw" event={"ID":"4c160bed-3a16-439b-b4b7-130d2cba6252","Type":"ContainerStarted","Data":"33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.217955 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-vsncw" event={"ID":"4c160bed-3a16-439b-b4b7-130d2cba6252","Type":"ContainerStarted","Data":"77f95776f7c9ed7b4809dc8fca03301ab31eafa258a803c11858bca5f682fb30"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.227025 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.245728 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.247889 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.252657 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.252695 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.252706 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.252725 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.252737 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.266319 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: W0228 09:02:12.267109 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda98c14ee_40d6_4e30_9390_154743a75c63.slice/crio-ebcbcbd2eb010d6688b074e0b6b9ce34fa3ff56d06291a7ff364ff085fc03e01 WatchSource:0}: Error finding container ebcbcbd2eb010d6688b074e0b6b9ce34fa3ff56d06291a7ff364ff085fc03e01: Status 404 returned error can't find the container with id ebcbcbd2eb010d6688b074e0b6b9ce34fa3ff56d06291a7ff364ff085fc03e01 Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.284031 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.301197 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjj82"] Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.302084 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.305373 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.306376 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.306705 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.306848 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.306836 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.306970 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.308814 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.313797 4996 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.331457 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.345563 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.356838 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.356910 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.356930 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.356958 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.356978 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.365400 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-slash\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.365491 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-etc-openvswitch\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.365528 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-bin\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.365567 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-ovn\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.365606 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-systemd\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.365644 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-config\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.365684 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovn-node-metrics-cert\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.365722 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-netns\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.366044 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-systemd-units\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.366234 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-netd\") pod \"ovnkube-node-hjj82\" (UID: 
\"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.366327 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-var-lib-openvswitch\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.366425 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-script-lib\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.366509 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-node-log\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.366621 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-kubelet\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.366670 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-env-overrides\") pod 
\"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.366707 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2k5h\" (UniqueName: \"kubernetes.io/projected/6730cd9d-a0be-4a00-966e-f936e7b888b6-kube-api-access-s2k5h\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.366891 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.367070 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.367208 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-openvswitch\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.367286 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-log-socket\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.371462 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.388426 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.402389 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.422176 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.433469 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.444392 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.455130 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.466747 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.466808 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.466824 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.466842 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.466853 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.467792 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-ovn\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.467851 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-systemd\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.467889 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-config\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.467908 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-ovn\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.467923 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovn-node-metrics-cert\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.467980 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-netns\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.467908 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-systemd\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468049 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-systemd-units\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468166 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-netns\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468192 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-systemd-units\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468196 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-netd\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468319 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-var-lib-openvswitch\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468346 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-script-lib\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468352 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-netd\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468370 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-node-log\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468400 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-node-log\") pod 
\"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468417 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-kubelet\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468428 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-var-lib-openvswitch\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468458 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468494 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468528 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-env-overrides\") pod 
\"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468560 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2k5h\" (UniqueName: \"kubernetes.io/projected/6730cd9d-a0be-4a00-966e-f936e7b888b6-kube-api-access-s2k5h\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468630 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-openvswitch\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468652 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-config\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468674 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-log-socket\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468698 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hjj82\" (UID: 
\"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468724 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-kubelet\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468737 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-slash\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468752 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-ovn-kubernetes\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468769 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-etc-openvswitch\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468788 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-openvswitch\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" 
Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468799 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-bin\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468892 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-bin\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468952 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-script-lib\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.468963 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-log-socket\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.469036 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-slash\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.469075 4996 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-etc-openvswitch\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.469126 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-env-overrides\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.472673 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovn-node-metrics-cert\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.480206 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.487360 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2k5h\" (UniqueName: \"kubernetes.io/projected/6730cd9d-a0be-4a00-966e-f936e7b888b6-kube-api-access-s2k5h\") pod \"ovnkube-node-hjj82\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.496063 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9
e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.515219 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.529251 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.540235 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.552783 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.562104 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.570687 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.570757 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.570776 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.570805 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.570824 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.574225 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.588278 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.603443 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.623242 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.628896 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:12 crc kubenswrapper[4996]: W0228 09:02:12.638310 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6730cd9d_a0be_4a00_966e_f936e7b888b6.slice/crio-8aa6eb47ea63b41f30ca16fdb385ac84005b929603266e6dcf90fafdbd2ac4ab WatchSource:0}: Error finding container 8aa6eb47ea63b41f30ca16fdb385ac84005b929603266e6dcf90fafdbd2ac4ab: Status 404 returned error can't find the container with id 8aa6eb47ea63b41f30ca16fdb385ac84005b929603266e6dcf90fafdbd2ac4ab Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.645131 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.665411 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.673052 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.673084 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.673095 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.673109 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.673118 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.696322 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.721904 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:12Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.775584 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.775621 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.775634 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.775652 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.775665 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.878305 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.878387 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.878400 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.878420 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.878435 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.980731 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.980779 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.980791 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.980811 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:12 crc kubenswrapper[4996]: I0228 09:02:12.980826 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:12Z","lastTransitionTime":"2026-02-28T09:02:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.049550 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.050260 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.050396 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.050557 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.050262 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.051377 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.064149 4996 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.064216 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-daemon-config podName:6ed5a0c7-4cae-4140-be04-b7a0f3899920 nodeName:}" failed. 
No retries permitted until 2026-02-28 09:02:13.564200023 +0000 UTC m=+97.255002824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-daemon-config") pod "multus-snglm" (UID: "6ed5a0c7-4cae-4140-be04-b7a0f3899920") : failed to sync configmap cache: timed out waiting for the condition Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.078782 4996 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.078797 4996 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.078817 4996 projected.go:194] Error preparing data for projected volume kube-api-access-zwkcp for pod openshift-multus/multus-additional-cni-plugins-ddgnd: failed to sync configmap cache: timed out waiting for the condition Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.078820 4996 projected.go:194] Error preparing data for projected volume kube-api-access-cl8vk for pod openshift-multus/multus-snglm: failed to sync configmap cache: timed out waiting for the condition Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.078860 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-kube-api-access-zwkcp podName:9e7ef261-12c5-4b48-b5e1-32dcaf0f4277 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:13.578848632 +0000 UTC m=+97.269651443 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zwkcp" (UniqueName: "kubernetes.io/projected/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-kube-api-access-zwkcp") pod "multus-additional-cni-plugins-ddgnd" (UID: "9e7ef261-12c5-4b48-b5e1-32dcaf0f4277") : failed to sync configmap cache: timed out waiting for the condition Feb 28 09:02:13 crc kubenswrapper[4996]: E0228 09:02:13.078873 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ed5a0c7-4cae-4140-be04-b7a0f3899920-kube-api-access-cl8vk podName:6ed5a0c7-4cae-4140-be04-b7a0f3899920 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:13.578866482 +0000 UTC m=+97.269669293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cl8vk" (UniqueName: "kubernetes.io/projected/6ed5a0c7-4cae-4140-be04-b7a0f3899920-kube-api-access-cl8vk") pod "multus-snglm" (UID: "6ed5a0c7-4cae-4140-be04-b7a0f3899920") : failed to sync configmap cache: timed out waiting for the condition Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.083365 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.083444 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.083470 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.083503 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.083529 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:13Z","lastTransitionTime":"2026-02-28T09:02:13Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.187173 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.187214 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.187233 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.187253 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.187269 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:13Z","lastTransitionTime":"2026-02-28T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.222142 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49" exitCode=0 Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.222223 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.222258 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"8aa6eb47ea63b41f30ca16fdb385ac84005b929603266e6dcf90fafdbd2ac4ab"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.224085 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.224118 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.224129 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"ebcbcbd2eb010d6688b074e0b6b9ce34fa3ff56d06291a7ff364ff085fc03e01"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 
09:02:13.242214 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 
09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"
containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.257695 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.281469 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.290266 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.290346 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.290377 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.290418 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.290450 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:13Z","lastTransitionTime":"2026-02-28T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.293597 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a2
7330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.311478 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.323059 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.347589 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.357483 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.363744 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.392558 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.392620 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.392632 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:13 crc 
kubenswrapper[4996]: I0228 09:02:13.392657 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.392668 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:13Z","lastTransitionTime":"2026-02-28T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.392895 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.408730 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.409426 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.425244 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.439654 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.453360 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:13Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.495843 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.495903 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.495915 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.495932 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.495952 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:13Z","lastTransitionTime":"2026-02-28T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.585074 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl8vk\" (UniqueName: \"kubernetes.io/projected/6ed5a0c7-4cae-4140-be04-b7a0f3899920-kube-api-access-cl8vk\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.585199 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-daemon-config\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.585258 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwkcp\" (UniqueName: \"kubernetes.io/projected/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-kube-api-access-zwkcp\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.586477 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6ed5a0c7-4cae-4140-be04-b7a0f3899920-multus-daemon-config\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.589264 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl8vk\" (UniqueName: \"kubernetes.io/projected/6ed5a0c7-4cae-4140-be04-b7a0f3899920-kube-api-access-cl8vk\") pod \"multus-snglm\" (UID: \"6ed5a0c7-4cae-4140-be04-b7a0f3899920\") " pod="openshift-multus/multus-snglm" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 
09:02:13.589913 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwkcp\" (UniqueName: \"kubernetes.io/projected/9e7ef261-12c5-4b48-b5e1-32dcaf0f4277-kube-api-access-zwkcp\") pod \"multus-additional-cni-plugins-ddgnd\" (UID: \"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\") " pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.597989 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.598074 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.598091 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.598114 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.598130 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:13Z","lastTransitionTime":"2026-02-28T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.699990 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.700055 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.700070 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.700086 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.700098 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:13Z","lastTransitionTime":"2026-02-28T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.725045 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-snglm" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.737155 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" Feb 28 09:02:13 crc kubenswrapper[4996]: W0228 09:02:13.766191 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e7ef261_12c5_4b48_b5e1_32dcaf0f4277.slice/crio-52ac13738bcc44eb2c626aac495ee988e42e40fb4197eeac5b06607395f21532 WatchSource:0}: Error finding container 52ac13738bcc44eb2c626aac495ee988e42e40fb4197eeac5b06607395f21532: Status 404 returned error can't find the container with id 52ac13738bcc44eb2c626aac495ee988e42e40fb4197eeac5b06607395f21532 Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.804809 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.804934 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.805001 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.805103 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.805199 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:13Z","lastTransitionTime":"2026-02-28T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.907784 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.908076 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.908088 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.908104 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:13 crc kubenswrapper[4996]: I0228 09:02:13.908115 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:13Z","lastTransitionTime":"2026-02-28T09:02:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.011206 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.011280 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.011306 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.011338 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.011419 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.113736 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.113770 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.113781 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.113795 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.113822 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.215821 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.215858 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.215868 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.215885 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.215896 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.232262 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-snglm" event={"ID":"6ed5a0c7-4cae-4140-be04-b7a0f3899920","Type":"ContainerStarted","Data":"18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.232314 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-snglm" event={"ID":"6ed5a0c7-4cae-4140-be04-b7a0f3899920","Type":"ContainerStarted","Data":"c31e6ea5be4c8871907059dbaa4254d65c62fdf3b0b84cd9e852cc3afb55f573"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.233898 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" event={"ID":"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277","Type":"ContainerStarted","Data":"ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.233962 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" event={"ID":"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277","Type":"ContainerStarted","Data":"52ac13738bcc44eb2c626aac495ee988e42e40fb4197eeac5b06607395f21532"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.250028 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.275346 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.295785 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.318084 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.318138 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.318150 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.318169 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.318181 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.324094 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.345281 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.366540 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.384811 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.408568 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.421281 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.421347 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.421368 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.421394 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.421411 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.421786 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.434389 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.449478 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.463227 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.479994 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.496116 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.511158 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.523258 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.523296 
4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.523306 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.523319 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.523329 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.523591 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.534586 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.555716 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.569667 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 
09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.583820 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.594941 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.610873 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.622683 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.625376 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.625406 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.625418 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.625434 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.625446 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.640290 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.653811 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.668687 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:14Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.740743 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.743240 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.743629 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.743822 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.744075 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.847424 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.847483 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.847497 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.847515 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.847528 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.950244 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.950273 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.950281 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.950294 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:14 crc kubenswrapper[4996]: I0228 09:02:14.950303 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:14Z","lastTransitionTime":"2026-02-28T09:02:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.034552 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:15 crc kubenswrapper[4996]: E0228 09:02:15.034663 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.034987 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:15 crc kubenswrapper[4996]: E0228 09:02:15.035066 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.035116 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:15 crc kubenswrapper[4996]: E0228 09:02:15.035168 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.052213 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.052250 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.052260 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.052277 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.052288 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.155338 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.155601 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.155757 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.155918 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.156087 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.240234 4996 generic.go:334] "Generic (PLEG): container finished" podID="9e7ef261-12c5-4b48-b5e1-32dcaf0f4277" containerID="ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e" exitCode=0 Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.240315 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" event={"ID":"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277","Type":"ContainerDied","Data":"ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.254206 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.254276 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.254300 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.254320 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.254340 4996 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.254357 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.258825 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.258912 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.258940 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.258972 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.258998 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.264462 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.281260 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.296546 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.318757 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.337305 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 
09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.352423 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.361495 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.361546 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.361558 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.361592 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.361606 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.366680 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.379522 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.398788 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.410729 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.428582 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.440924 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.463889 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.463949 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.463969 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc 
kubenswrapper[4996]: I0228 09:02:15.463996 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.464051 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.465589 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:15Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.567289 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.567354 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.567372 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.567403 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.567421 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.671362 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.671433 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.671451 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.671480 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.671500 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.774556 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.774622 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.774641 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.774666 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.774685 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.877593 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.877639 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.877650 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.877667 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.877681 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.981000 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.981111 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.981128 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.981151 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:15 crc kubenswrapper[4996]: I0228 09:02:15.981170 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:15Z","lastTransitionTime":"2026-02-28T09:02:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.083536 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.083582 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.083594 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.083618 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.083631 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:16Z","lastTransitionTime":"2026-02-28T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.185817 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.185866 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.185875 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.185890 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.185900 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:16Z","lastTransitionTime":"2026-02-28T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.265716 4996 generic.go:334] "Generic (PLEG): container finished" podID="9e7ef261-12c5-4b48-b5e1-32dcaf0f4277" containerID="7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024" exitCode=0 Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.265778 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" event={"ID":"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277","Type":"ContainerDied","Data":"7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.287839 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.290603 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.290644 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.290659 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.290679 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.290697 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:16Z","lastTransitionTime":"2026-02-28T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.307432 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z 
is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.331452 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.352020 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.367869 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.383572 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.393107 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.393138 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.393147 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.393160 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.393169 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:16Z","lastTransitionTime":"2026-02-28T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.404754 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.419454 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.434247 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.443703 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee7139
4d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.461876 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.474204 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.488270 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:16Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.495672 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.495701 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.495710 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 
09:02:16.495723 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.495732 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:16Z","lastTransitionTime":"2026-02-28T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.598343 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.598390 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.598407 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.598429 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.598445 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:16Z","lastTransitionTime":"2026-02-28T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.701882 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.701952 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.701972 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.702031 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.702053 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:16Z","lastTransitionTime":"2026-02-28T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.807359 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.807399 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.807411 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.807427 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.807439 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:16Z","lastTransitionTime":"2026-02-28T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.909535 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.909601 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.909628 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.909657 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:16 crc kubenswrapper[4996]: I0228 09:02:16.909678 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:16Z","lastTransitionTime":"2026-02-28T09:02:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.011882 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.011945 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.011962 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.011988 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.012002 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.032175 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.032292 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:17 crc kubenswrapper[4996]: E0228 09:02:17.032434 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.032451 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:17 crc kubenswrapper[4996]: E0228 09:02:17.032603 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:17 crc kubenswrapper[4996]: E0228 09:02:17.032709 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.053442 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.078404 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.095257 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.107966 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.114832 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc 
kubenswrapper[4996]: I0228 09:02:17.114863 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.114874 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.114892 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.114904 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.130422 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.153254 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.167085 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.184846 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.202738 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.217676 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.217733 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.217758 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.217786 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.217803 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.219887 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.234648 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.254312 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.274456 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.281230 4996 generic.go:334] "Generic (PLEG): container finished" podID="9e7ef261-12c5-4b48-b5e1-32dcaf0f4277" 
containerID="57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753" exitCode=0 Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.281271 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" event={"ID":"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277","Type":"ContainerDied","Data":"57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.288666 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.305932 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.322829 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.322859 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.322871 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.322889 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.322923 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.323474 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.339121 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.359551 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.378111 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.389808 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.402753 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.413494 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.425818 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.425862 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.425875 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.425892 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.425903 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.426359 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.439073 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 
09:02:17.448899 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.464332 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.479842 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:17Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.527839 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.527879 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.527893 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 
09:02:17.527907 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.527918 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.630044 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.630096 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.630117 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.630142 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.630160 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.733628 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.733677 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.733695 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.733717 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.733733 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.841504 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.841869 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.841889 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.841922 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.841949 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.944153 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.944225 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.944248 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.944275 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:17 crc kubenswrapper[4996]: I0228 09:02:17.944298 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:17Z","lastTransitionTime":"2026-02-28T09:02:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.047670 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.047746 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.047764 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.047787 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.047808 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.151200 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.151255 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.151271 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.151294 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.151326 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.255331 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.255399 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.255435 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.255465 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.255488 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.289962 4996 generic.go:334] "Generic (PLEG): container finished" podID="9e7ef261-12c5-4b48-b5e1-32dcaf0f4277" containerID="f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311" exitCode=0 Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.290061 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" event={"ID":"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277","Type":"ContainerDied","Data":"f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.308954 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.339093 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-d7hbs"] Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.339678 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.340761 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f1
28756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.345991 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.346316 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.346615 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.346869 4996 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.359506 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.359552 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.359565 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.359584 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.359600 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.363454 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z 
is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.388563 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02
-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.404075 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.404115 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.404125 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.404139 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.404150 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.406506 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: E0228 09:02:18.427161 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.433456 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.434795 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/88053881-36e8-4bf7-b911-d2457f8bca30-host\") pod \"node-ca-d7hbs\" (UID: \"88053881-36e8-4bf7-b911-d2457f8bca30\") " 
pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.434889 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz55l\" (UniqueName: \"kubernetes.io/projected/88053881-36e8-4bf7-b911-d2457f8bca30-kube-api-access-bz55l\") pod \"node-ca-d7hbs\" (UID: \"88053881-36e8-4bf7-b911-d2457f8bca30\") " pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.434925 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/88053881-36e8-4bf7-b911-d2457f8bca30-serviceca\") pod \"node-ca-d7hbs\" (UID: \"88053881-36e8-4bf7-b911-d2457f8bca30\") " pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.436496 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.436535 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.436552 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.436571 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.436584 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.454074 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: E0228 09:02:18.467659 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.468981 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.471913 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.471944 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.471952 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.471965 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.471974 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: E0228 09:02:18.486532 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.489706 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.489727 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.489735 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.489747 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.489756 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.492080 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: E0228 09:02:18.505931 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.508965 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.508986 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.508994 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.509021 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.509032 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.509902 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cce
eeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":
\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:
11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: E0228 09:02:18.520307 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: E0228 09:02:18.520439 4996 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.521483 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.521506 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.521515 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.521525 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc 
kubenswrapper[4996]: I0228 09:02:18.521534 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.522412 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.535585 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz55l\" (UniqueName: \"kubernetes.io/projected/88053881-36e8-4bf7-b911-d2457f8bca30-kube-api-access-bz55l\") pod \"node-ca-d7hbs\" (UID: \"88053881-36e8-4bf7-b911-d2457f8bca30\") " pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.535617 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/88053881-36e8-4bf7-b911-d2457f8bca30-serviceca\") pod \"node-ca-d7hbs\" (UID: \"88053881-36e8-4bf7-b911-d2457f8bca30\") " pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.535667 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/88053881-36e8-4bf7-b911-d2457f8bca30-host\") pod \"node-ca-d7hbs\" (UID: \"88053881-36e8-4bf7-b911-d2457f8bca30\") " pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.535729 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/88053881-36e8-4bf7-b911-d2457f8bca30-host\") pod \"node-ca-d7hbs\" (UID: \"88053881-36e8-4bf7-b911-d2457f8bca30\") " pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.536514 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/88053881-36e8-4bf7-b911-d2457f8bca30-serviceca\") pod \"node-ca-d7hbs\" (UID: \"88053881-36e8-4bf7-b911-d2457f8bca30\") " 
pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.547358 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.552857 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz55l\" (UniqueName: \"kubernetes.io/projected/88053881-36e8-4bf7-b911-d2457f8bca30-kube-api-access-bz55l\") pod \"node-ca-d7hbs\" (UID: \"88053881-36e8-4bf7-b911-d2457f8bca30\") " pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.566078 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.578319 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.590787 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.604023 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-pr
oxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.624242 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.624296 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.624313 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.624336 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.624353 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.624331 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.640183 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.653951 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.665096 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-d7hbs" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.676151 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: W0228 09:02:18.682607 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88053881_36e8_4bf7_b911_d2457f8bca30.slice/crio-2bd105ba9381c8ec58c98d8a539ed8f5674bb05cbd248af9d340b0e2342ea652 WatchSource:0}: Error finding container 2bd105ba9381c8ec58c98d8a539ed8f5674bb05cbd248af9d340b0e2342ea652: Status 404 returned error can't find the container with id 2bd105ba9381c8ec58c98d8a539ed8f5674bb05cbd248af9d340b0e2342ea652 Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.697233 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.720907 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.727480 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc 
kubenswrapper[4996]: I0228 09:02:18.727515 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.727528 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.727543 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.727576 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.746332 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9
e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.763605 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.779402 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.795462 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.823490 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:18Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.831320 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.831386 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.831403 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.831426 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 
09:02:18.831444 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.934199 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.934658 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.934678 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.934704 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:18 crc kubenswrapper[4996]: I0228 09:02:18.934723 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:18Z","lastTransitionTime":"2026-02-28T09:02:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.033186 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.033210 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.033166 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:19 crc kubenswrapper[4996]: E0228 09:02:19.033371 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:19 crc kubenswrapper[4996]: E0228 09:02:19.033569 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:19 crc kubenswrapper[4996]: E0228 09:02:19.033719 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.037963 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.038087 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.038115 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.038145 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.038171 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.141404 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.141438 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.141452 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.141472 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.141486 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.244377 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.244462 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.244484 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.244509 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.244533 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.298619 4996 generic.go:334] "Generic (PLEG): container finished" podID="9e7ef261-12c5-4b48-b5e1-32dcaf0f4277" containerID="66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f" exitCode=0 Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.298691 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" event={"ID":"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277","Type":"ContainerDied","Data":"66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.305610 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d7hbs" event={"ID":"88053881-36e8-4bf7-b911-d2457f8bca30","Type":"ContainerStarted","Data":"d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.305672 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d7hbs" event={"ID":"88053881-36e8-4bf7-b911-d2457f8bca30","Type":"ContainerStarted","Data":"2bd105ba9381c8ec58c98d8a539ed8f5674bb05cbd248af9d340b0e2342ea652"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.315519 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.331644 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.346740 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.346785 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.346797 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.346816 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.346828 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.356365 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\
\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.377202 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.398048 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 
09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.425142 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.439991 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.449533 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.449565 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.449590 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.449604 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.449614 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.453120 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a2
7330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.466159 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.489861 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.501145 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.521539 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.542977 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.552024 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.552062 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.552072 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 
09:02:19.552086 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.552095 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.556421 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.573108 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.586292 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.604812 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.635416 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.654986 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.655674 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.655727 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.655748 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.655777 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.655799 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.672382 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.689974 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.705794 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.723675 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.745266 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00
fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.758564 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.758602 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.758614 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.758632 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.758645 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.767367 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.800044 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.823317 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.841047 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:19Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.860560 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.860590 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.860601 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.860617 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.860629 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.963948 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.964045 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.964065 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.964089 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:19 crc kubenswrapper[4996]: I0228 09:02:19.964110 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:19Z","lastTransitionTime":"2026-02-28T09:02:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.066729 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.066791 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.066809 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.066834 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.066852 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.170129 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.170184 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.170202 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.170226 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.170244 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.274764 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.274857 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.274881 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.274909 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.274930 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.315629 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.316180 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.319724 4996 generic.go:334] "Generic (PLEG): container finished" podID="9e7ef261-12c5-4b48-b5e1-32dcaf0f4277" containerID="2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec" exitCode=0 Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.319787 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" event={"ID":"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277","Type":"ContainerDied","Data":"2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.377685 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.377761 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.377785 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.377817 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.377837 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.384993 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.386240 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.402567 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.427166 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.443223 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.462675 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.473661 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.484473 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.484526 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.484545 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.484580 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.484598 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.491292 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.508123 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.524483 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.537955 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.563397 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.577937 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 
09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.588976 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.589037 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.589053 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.589075 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:20 crc 
kubenswrapper[4996]: I0228 09:02:20.589090 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.593518 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.605095 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.629952 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.650054 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.665400 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.679982 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 
09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.691092 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.691125 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.691135 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.691149 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:20 crc 
kubenswrapper[4996]: I0228 09:02:20.691160 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.693272 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.705645 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.716271 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.741661 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.754324 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.771963 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.787840 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.793725 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.793770 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.793780 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc 
kubenswrapper[4996]: I0228 09:02:20.793796 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.793809 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.805803 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.823166 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.833338 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:20Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.896638 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.896703 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.896725 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.896751 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.896770 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.999228 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.999268 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.999279 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:20 crc kubenswrapper[4996]: I0228 09:02:20.999467 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:20.999481 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:20Z","lastTransitionTime":"2026-02-28T09:02:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.032133 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.032248 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:21 crc kubenswrapper[4996]: E0228 09:02:21.032441 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.032496 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:21 crc kubenswrapper[4996]: E0228 09:02:21.032947 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:21 crc kubenswrapper[4996]: E0228 09:02:21.033212 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.033381 4996 scope.go:117] "RemoveContainer" containerID="00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.102169 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.102228 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.102247 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.102269 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.102285 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:21Z","lastTransitionTime":"2026-02-28T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.205205 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.205256 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.205272 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.205294 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.205311 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:21Z","lastTransitionTime":"2026-02-28T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.307609 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.307648 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.307663 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.307731 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.307750 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:21Z","lastTransitionTime":"2026-02-28T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.330453 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" event={"ID":"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277","Type":"ContainerStarted","Data":"796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.331501 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.331554 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.355908 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866b
e30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0
ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.359308 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.369915 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 
09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.381699 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.392545 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.400983 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.410211 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.410249 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.410259 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.410304 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.410317 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:21Z","lastTransitionTime":"2026-02-28T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.412845 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.427347 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.441675 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.460636 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.477225 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.488247 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.500985 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.513661 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.513733 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.513757 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.513786 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.513809 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:21Z","lastTransitionTime":"2026-02-28T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.517475 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756
604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.534813 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.547847 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.561290 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.574022 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.599199 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.617199 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.617257 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.617273 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.617296 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 
09:02:21.617316 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:21Z","lastTransitionTime":"2026-02-28T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.619784 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.641404 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\
\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\
\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.653469 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.675400 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.691294 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.707709 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.719510 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.719572 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.719581 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.719611 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.719622 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:21Z","lastTransitionTime":"2026-02-28T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.727382 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.744105 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.760074 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.772892 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:21Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.823044 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.823099 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.823114 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.823136 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.823153 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:21Z","lastTransitionTime":"2026-02-28T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.925455 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.925511 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.925528 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.925549 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:21 crc kubenswrapper[4996]: I0228 09:02:21.925562 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:21Z","lastTransitionTime":"2026-02-28T09:02:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.027609 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.027654 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.027663 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.027677 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.027689 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.130710 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.130753 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.130765 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.130782 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.130792 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.232843 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.232890 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.232901 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.232921 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.232935 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.335574 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.335626 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.335636 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.335650 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.335660 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.340907 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.344042 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.345033 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.361522 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db4
1d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.391422 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.405207 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.420606 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2266
24c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.438582 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.438616 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.438627 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.438643 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.438658 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.439274 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.448575 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.469166 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"ph
ase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.490676 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.508245 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.520545 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.536800 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.541268 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.541329 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.541339 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.541355 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.541389 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.556958 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.572793 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.591119 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:22Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.643894 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.644215 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.644225 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc 
kubenswrapper[4996]: I0228 09:02:22.644239 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.644250 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.747829 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.747886 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.747895 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.747909 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.747919 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.850320 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.850383 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.850399 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.850422 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.850440 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.952881 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.952933 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.952945 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.952965 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:22 crc kubenswrapper[4996]: I0228 09:02:22.952978 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:22Z","lastTransitionTime":"2026-02-28T09:02:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.032972 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.033034 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:23 crc kubenswrapper[4996]: E0228 09:02:23.033193 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.033282 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:23 crc kubenswrapper[4996]: E0228 09:02:23.033465 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:23 crc kubenswrapper[4996]: E0228 09:02:23.033589 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.055096 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.055166 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.055192 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.055222 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.055247 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.157682 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.157729 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.157742 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.157760 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.157776 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.260087 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.260130 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.260142 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.260158 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.260173 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.350825 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/0.log" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.355365 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6" exitCode=1 Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.355430 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.356525 4996 scope.go:117] "RemoveContainer" containerID="f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.362153 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.362197 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.362213 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.362237 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.362256 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.372211 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.399600 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ee
aee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb403
8d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284
094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"
cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason
\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.419242 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.455482 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:22Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602735 6803 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.602797 6803 
reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602898 6803 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603048 6803 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603239 6803 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603731 6803 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0228 09:02:22.603806 6803 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 09:02:22.603816 6803 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 09:02:22.603834 6803 factory.go:656] Stopping watch factory\\\\nI0228 09:02:22.603851 6803 ovnkube.go:599] Stopped ovnkube\\\\nI0228 09:02:22.603886 6803 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0228 09:02:22.603905 6803 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce
2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.465387 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.465436 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.465459 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.465484 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.465502 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.476323 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.493797 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.520086 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.541843 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.564595 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.568296 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc 
kubenswrapper[4996]: I0228 09:02:23.568324 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.568339 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.568357 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.568368 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.604387 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.647902 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.664449 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.670095 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.670136 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.670146 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc 
kubenswrapper[4996]: I0228 09:02:23.670161 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.670171 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.680902 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.692557 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:23Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.772723 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.772764 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.772774 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.772788 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.772798 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.874853 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.874897 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.874909 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.874924 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.874935 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.979128 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.979171 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.979182 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.979208 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:23 crc kubenswrapper[4996]: I0228 09:02:23.979222 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:23Z","lastTransitionTime":"2026-02-28T09:02:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.081104 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.081139 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.081147 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.081162 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.081173 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:24Z","lastTransitionTime":"2026-02-28T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.183513 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.183562 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.183578 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.183600 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.183617 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:24Z","lastTransitionTime":"2026-02-28T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.286154 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.286199 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.286215 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.286237 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.286253 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:24Z","lastTransitionTime":"2026-02-28T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.362542 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/0.log" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.366663 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.367291 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.369629 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5"] Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.370389 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.373368 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.373396 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.388626 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.388672 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.388688 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.388708 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.388722 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:24Z","lastTransitionTime":"2026-02-28T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.400994 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.422464 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.445181 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.458625 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.483762 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.493282 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.493378 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.493402 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.493435 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 
09:02:24.493455 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:24Z","lastTransitionTime":"2026-02-28T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.498980 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.513380 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.526612 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.526910 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20e2a5c8-d6c8-4512-a359-749b6b66d989-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.526976 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20e2a5c8-d6c8-4512-a359-749b6b66d989-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 
crc kubenswrapper[4996]: I0228 09:02:24.527020 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20e2a5c8-d6c8-4512-a359-749b6b66d989-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.527049 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-282z6\" (UniqueName: \"kubernetes.io/projected/20e2a5c8-d6c8-4512-a359-749b6b66d989-kube-api-access-282z6\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.546771 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:22Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602735 6803 
reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.602797 6803 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602898 6803 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603048 6803 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603239 6803 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603731 6803 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0228 09:02:22.603806 6803 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 09:02:22.603816 6803 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 09:02:22.603834 6803 factory.go:656] Stopping watch factory\\\\nI0228 09:02:22.603851 6803 ovnkube.go:599] Stopped ovnkube\\\\nI0228 09:02:22.603886 6803 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0228 09:02:22.603905 6803 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.561620 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.581656 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.596384 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.596457 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.596481 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.596515 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.596540 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:24Z","lastTransitionTime":"2026-02-28T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.599527 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.619234 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.627815 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20e2a5c8-d6c8-4512-a359-749b6b66d989-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.627912 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20e2a5c8-d6c8-4512-a359-749b6b66d989-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.627956 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/20e2a5c8-d6c8-4512-a359-749b6b66d989-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.627993 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-282z6\" (UniqueName: \"kubernetes.io/projected/20e2a5c8-d6c8-4512-a359-749b6b66d989-kube-api-access-282z6\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.629149 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/20e2a5c8-d6c8-4512-a359-749b6b66d989-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.629410 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/20e2a5c8-d6c8-4512-a359-749b6b66d989-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.634159 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.636218 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/20e2a5c8-d6c8-4512-a359-749b6b66d989-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.649087 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.652531 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-282z6\" (UniqueName: \"kubernetes.io/projected/20e2a5c8-d6c8-4512-a359-749b6b66d989-kube-api-access-282z6\") pod \"ovnkube-control-plane-749d76644c-zf2l5\" (UID: \"20e2a5c8-d6c8-4512-a359-749b6b66d989\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.660250 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.690134 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/
lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.693197 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.703442 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.703510 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.703530 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.703556 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.703575 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:24Z","lastTransitionTime":"2026-02-28T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.716475 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.738828 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.759187 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.780860 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:22Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602735 6803 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.602797 6803 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602898 6803 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603048 6803 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603239 6803 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603731 6803 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0228 09:02:22.603806 6803 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 09:02:22.603816 6803 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 09:02:22.603834 6803 factory.go:656] Stopping watch factory\\\\nI0228 09:02:22.603851 6803 ovnkube.go:599] Stopped ovnkube\\\\nI0228 09:02:22.603886 6803 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0228 09:02:22.603905 6803 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.797757 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.805951 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.805995 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.806022 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.806041 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.806053 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:24Z","lastTransitionTime":"2026-02-28T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.865454 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.882390 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.893564 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.909529 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.909564 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.909575 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.909591 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.909604 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:24Z","lastTransitionTime":"2026-02-28T09:02:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.914309 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z 
is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.926868 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.945344 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:24 crc kubenswrapper[4996]: I0228 09:02:24.964950 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:24Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.013234 4996 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.013281 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.013290 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.013304 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.013314 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.032328 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.032401 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.032448 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:25 crc kubenswrapper[4996]: E0228 09:02:25.032495 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:25 crc kubenswrapper[4996]: E0228 09:02:25.032663 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:25 crc kubenswrapper[4996]: E0228 09:02:25.032774 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.117808 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.117868 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.117886 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.117910 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.117930 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.125434 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9n7bm"] Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.126314 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:25 crc kubenswrapper[4996]: E0228 09:02:25.126419 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.163404 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2670
2f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.168814 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdk24\" (UniqueName: \"kubernetes.io/projected/326e8318-b5b5-4d7b-a838-01d28808161b-kube-api-access-pdk24\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.169258 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.181657 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 
09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.199477 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.221672 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.221720 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.221732 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.221752 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.221766 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.222454 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.237566 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.252099 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.270163 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.270218 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdk24\" (UniqueName: \"kubernetes.io/projected/326e8318-b5b5-4d7b-a838-01d28808161b-kube-api-access-pdk24\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:25 crc kubenswrapper[4996]: E0228 09:02:25.270567 4996 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:25 crc kubenswrapper[4996]: E0228 09:02:25.270799 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs 
podName:326e8318-b5b5-4d7b-a838-01d28808161b nodeName:}" failed. No retries permitted until 2026-02-28 09:02:25.770774229 +0000 UTC m=+109.461577070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs") pod "network-metrics-daemon-9n7bm" (UID: "326e8318-b5b5-4d7b-a838-01d28808161b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.277375 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb914
4e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"e
xitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9
c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.292272 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.294320 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdk24\" (UniqueName: \"kubernetes.io/projected/326e8318-b5b5-4d7b-a838-01d28808161b-kube-api-access-pdk24\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.314433 4996 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:22Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602735 6803 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.602797 6803 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602898 6803 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603048 6803 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603239 6803 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603731 6803 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0228 09:02:22.603806 6803 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 09:02:22.603816 6803 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 09:02:22.603834 6803 factory.go:656] Stopping watch factory\\\\nI0228 09:02:22.603851 6803 ovnkube.go:599] Stopped ovnkube\\\\nI0228 09:02:22.603886 6803 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0228 09:02:22.603905 6803 handler.go:208] Removed *v1.Node event handler 
2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.325154 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.325224 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.325247 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.325282 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.325307 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.327894 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.341146 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.354426 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc 
kubenswrapper[4996]: I0228 09:02:25.371961 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" event={"ID":"20e2a5c8-d6c8-4512-a359-749b6b66d989","Type":"ContainerStarted","Data":"5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.372040 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" event={"ID":"20e2a5c8-d6c8-4512-a359-749b6b66d989","Type":"ContainerStarted","Data":"d02208051e0342fe1a1bec734d263fee7dcc94beabb93a7d81eb930da6018a01"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.373774 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operat
or\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.374595 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/1.log" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.375586 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/0.log" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.380672 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01" exitCode=1 Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.380718 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" 
event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.380752 4996 scope.go:117] "RemoveContainer" containerID="f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.382237 4996 scope.go:117] "RemoveContainer" containerID="7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01" Feb 28 09:02:25 crc kubenswrapper[4996]: E0228 09:02:25.382566 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.388986 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.408175 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.422135 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.428512 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.428550 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.428562 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.428581 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.428593 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.443887 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.460905 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.472824 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.483491 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.500811 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc2
1980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.515141 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.525976 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.530840 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.530867 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.530878 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc 
kubenswrapper[4996]: I0228 09:02:25.530895 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.530908 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.538425 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.546304 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.557890 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.577666 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cd
f3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa930
89f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.594054 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.611957 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:22Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602735 6803 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.602797 6803 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602898 6803 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603048 6803 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603239 6803 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603731 6803 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0228 09:02:22.603806 6803 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 09:02:22.603816 6803 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 09:02:22.603834 6803 factory.go:656] Stopping watch factory\\\\nI0228 09:02:22.603851 6803 ovnkube.go:599] Stopped ovnkube\\\\nI0228 09:02:22.603886 6803 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0228 09:02:22.603905 6803 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 09:02:24.269638 7031 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 09:02:24.269651 7031 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612
ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.624459 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.633255 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.633716 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.633828 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.633903 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.633979 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.634056 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.648676 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:25Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:25 crc 
kubenswrapper[4996]: I0228 09:02:25.737238 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.737293 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.737306 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.737325 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.737338 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.776407 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:25 crc kubenswrapper[4996]: E0228 09:02:25.776705 4996 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:25 crc kubenswrapper[4996]: E0228 09:02:25.776787 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs podName:326e8318-b5b5-4d7b-a838-01d28808161b nodeName:}" failed. No retries permitted until 2026-02-28 09:02:26.77676614 +0000 UTC m=+110.467568961 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs") pod "network-metrics-daemon-9n7bm" (UID: "326e8318-b5b5-4d7b-a838-01d28808161b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.841300 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.841349 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.841361 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.841378 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.841391 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.944361 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.944406 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.944415 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.944429 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:25 crc kubenswrapper[4996]: I0228 09:02:25.944440 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:25Z","lastTransitionTime":"2026-02-28T09:02:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.046880 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.047219 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.047292 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.047516 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.047626 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.150418 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.150460 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.150471 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.150485 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.150496 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.253908 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.254252 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.254266 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.254291 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.254305 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.357135 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.357179 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.357190 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.357206 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.357217 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.387192 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" event={"ID":"20e2a5c8-d6c8-4512-a359-749b6b66d989","Type":"ContainerStarted","Data":"c3e99b43f80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.390051 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/1.log" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.394768 4996 scope.go:117] "RemoveContainer" containerID="7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01" Feb 28 09:02:26 crc kubenswrapper[4996]: E0228 09:02:26.394911 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.421834 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.442926 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.460624 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.461285 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.461360 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.461385 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc 
kubenswrapper[4996]: I0228 09:02:26.461415 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.461438 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.478803 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.495404 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.512873 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.538704 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cd
f3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa930
89f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.554816 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.564402 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.564439 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.564450 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc 
kubenswrapper[4996]: I0228 09:02:26.564466 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.564477 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.582803 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f50961640f075f1e8552bcb3f8aca005b9acb1474c3de62002515b84d6feb6e6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:22Z\\\",\\\"message\\\":\\\"/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602735 6803 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.602797 6803 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 09:02:22.602898 6803 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603048 6803 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603239 6803 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 09:02:22.603731 6803 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0228 09:02:22.603806 6803 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0228 09:02:22.603816 6803 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0228 09:02:22.603834 6803 factory.go:656] Stopping watch factory\\\\nI0228 09:02:22.603851 6803 ovnkube.go:599] Stopped ovnkube\\\\nI0228 09:02:22.603886 6803 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0228 09:02:22.603905 6803 handler.go:208] Removed *v1.Node event handler 2\\\\nI02\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 09:02:24.269638 7031 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 09:02:24.269651 7031 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612
ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.598298 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.612696 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.623626 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc 
kubenswrapper[4996]: I0228 09:02:26.638372 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.650904 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.666899 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.666943 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: 
I0228 09:02:26.666954 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.666970 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.666984 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.670044 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f
6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.684501 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f80f92e75812bd80a9d80e0d972eb
0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.702250 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.721922 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.736868 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.752372 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f
80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.769169 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.769211 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.769224 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.769240 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.769254 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.785714 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba71
9f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.788067 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:26 crc 
kubenswrapper[4996]: E0228 09:02:26.788296 4996 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:26 crc kubenswrapper[4996]: E0228 09:02:26.788412 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs podName:326e8318-b5b5-4d7b-a838-01d28808161b nodeName:}" failed. No retries permitted until 2026-02-28 09:02:28.788387583 +0000 UTC m=+112.479190484 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs") pod "network-metrics-daemon-9n7bm" (UID: "326e8318-b5b5-4d7b-a838-01d28808161b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.806192 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.825236 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.843727 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.865972 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.871547 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.871592 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.871603 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.871619 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.871632 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.879455 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.897558 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.912593 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.931229 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 09:02:24.269638 7031 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 09:02:24.269651 7031 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b0138
33638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.942351 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.952033 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.960348 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:26Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:26 crc 
kubenswrapper[4996]: I0228 09:02:26.974131 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.974165 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.974175 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.974193 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:26 crc kubenswrapper[4996]: I0228 09:02:26.974208 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:26Z","lastTransitionTime":"2026-02-28T09:02:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.032150 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.032196 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.032248 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.032304 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:27 crc kubenswrapper[4996]: E0228 09:02:27.032789 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:27 crc kubenswrapper[4996]: E0228 09:02:27.032605 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:27 crc kubenswrapper[4996]: E0228 09:02:27.032919 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:27 crc kubenswrapper[4996]: E0228 09:02:27.032734 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.055699 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.074839 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\"
:\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.077083 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.077234 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.077344 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.077629 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.077737 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:27Z","lastTransitionTime":"2026-02-28T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.095309 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.112357 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f
80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.143754 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.172639 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.181553 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.181633 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.181659 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.181691 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.181718 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:27Z","lastTransitionTime":"2026-02-28T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.188494 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.205035 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.220971 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.236551 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.261214 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cd
f3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa930
89f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.273459 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.284444 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.284582 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.284607 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:27 crc 
kubenswrapper[4996]: I0228 09:02:27.284640 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.284664 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:27Z","lastTransitionTime":"2026-02-28T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.303809 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 09:02:24.269638 7031 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 09:02:24.269651 7031 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b0138
33638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.321147 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.335022 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.345537 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:27Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:27 crc 
kubenswrapper[4996]: I0228 09:02:27.401373 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.401426 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.401442 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.401461 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.401474 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:27Z","lastTransitionTime":"2026-02-28T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.501138 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:02:27 crc kubenswrapper[4996]: E0228 09:02:27.501291 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:02:59.501269161 +0000 UTC m=+143.192071982 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.503475 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.503553 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.503579 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.503611 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.503636 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:27Z","lastTransitionTime":"2026-02-28T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.606072 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.606137 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.606155 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.606176 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.606193 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:27Z","lastTransitionTime":"2026-02-28T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.710059 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.710123 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.710140 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.710165 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.710183 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:27Z","lastTransitionTime":"2026-02-28T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.813321 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.813387 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.813406 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.813431 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.813452 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:27Z","lastTransitionTime":"2026-02-28T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.904353 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:27 crc kubenswrapper[4996]: E0228 09:02:27.904649 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:02:27 crc kubenswrapper[4996]: E0228 09:02:27.904698 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:02:27 crc kubenswrapper[4996]: E0228 09:02:27.904712 4996 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:27 crc kubenswrapper[4996]: E0228 09:02:27.904789 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 09:02:59.904769641 +0000 UTC m=+143.595572462 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.916439 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.916537 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.916556 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.916581 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:27 crc kubenswrapper[4996]: I0228 09:02:27.916637 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:27Z","lastTransitionTime":"2026-02-28T09:02:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.005388 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.005449 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.005476 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.005581 4996 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.005600 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.005638 4996 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:03:00.005620032 +0000 UTC m=+143.696422854 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.005651 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.005680 4996 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.005706 4996 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.005791 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 09:03:00.005758136 +0000 UTC m=+143.696560987 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.005831 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:03:00.005813847 +0000 UTC m=+143.696616698 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.019771 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.019859 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.019880 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.019902 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.019958 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.122564 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.122624 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.122641 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.122665 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.122682 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.226134 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.226203 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.226214 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.226236 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.226249 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.328745 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.328804 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.328815 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.328829 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.328839 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.431880 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.431914 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.431923 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.431937 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.431947 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.534417 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.534456 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.534468 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.534485 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.534497 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.637576 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.637645 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.637662 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.637686 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.637705 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.705309 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.705365 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.705378 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.705395 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.705409 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.723487 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:28Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.728146 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.728201 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.728218 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.728242 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.728259 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.747302 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:28Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.752235 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.752293 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.752311 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.752335 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.752356 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.770352 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:28Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.776502 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.776617 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.776659 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.776694 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.776720 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.791940 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:28Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.795691 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.795753 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.795770 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.795796 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.795812 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.809798 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:28Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.809936 4996 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.811803 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.811934 4996 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:28 crc kubenswrapper[4996]: E0228 09:02:28.812033 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs podName:326e8318-b5b5-4d7b-a838-01d28808161b nodeName:}" failed. No retries permitted until 2026-02-28 09:02:32.811982457 +0000 UTC m=+116.502785278 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs") pod "network-metrics-daemon-9n7bm" (UID: "326e8318-b5b5-4d7b-a838-01d28808161b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.812559 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.812586 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.812597 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.812610 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.812621 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.914858 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.914909 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.914922 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.914939 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:28 crc kubenswrapper[4996]: I0228 09:02:28.914952 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:28Z","lastTransitionTime":"2026-02-28T09:02:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.018062 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.018131 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.018155 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.018184 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.018203 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.036216 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.036244 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.036253 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.036323 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:29 crc kubenswrapper[4996]: E0228 09:02:29.036494 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:29 crc kubenswrapper[4996]: E0228 09:02:29.036804 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:29 crc kubenswrapper[4996]: E0228 09:02:29.036896 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:29 crc kubenswrapper[4996]: E0228 09:02:29.036998 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.120604 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.120660 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.120677 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.120701 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.120718 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.224044 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.224103 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.224118 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.224140 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.224156 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.327191 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.327239 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.327248 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.327263 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.327274 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.429852 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.429925 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.429944 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.429968 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.429986 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.533274 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.533348 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.533366 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.533392 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.533410 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.636584 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.636653 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.636670 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.636695 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.636777 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.739796 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.739860 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.739880 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.739906 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.739926 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.842239 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.842320 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.842335 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.842375 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.842391 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.945524 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.945586 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.945603 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.945632 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:29 crc kubenswrapper[4996]: I0228 09:02:29.945653 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:29Z","lastTransitionTime":"2026-02-28T09:02:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.048710 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.048755 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.048770 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.048789 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.048804 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.152435 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.152504 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.152521 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.152545 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.152565 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.255445 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.255498 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.255515 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.255537 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.255554 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.359137 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.359209 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.359223 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.359247 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.359260 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.461835 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.461872 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.461881 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.461896 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.461906 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.564821 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.564869 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.564883 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.564901 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.564923 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.668667 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.668809 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.668841 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.668876 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.668901 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.771883 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.771954 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.771965 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.771985 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.771998 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.874682 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.874717 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.874728 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.874747 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.874764 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.977605 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.977692 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.977761 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.977790 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:30 crc kubenswrapper[4996]: I0228 09:02:30.977812 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:30Z","lastTransitionTime":"2026-02-28T09:02:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.032531 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.032599 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.032543 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.032527 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:31 crc kubenswrapper[4996]: E0228 09:02:31.032729 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:31 crc kubenswrapper[4996]: E0228 09:02:31.032905 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:31 crc kubenswrapper[4996]: E0228 09:02:31.033182 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:31 crc kubenswrapper[4996]: E0228 09:02:31.033321 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.080291 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.080345 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.080358 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.080377 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.080388 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:31Z","lastTransitionTime":"2026-02-28T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.183607 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.183668 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.183686 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.183714 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.183734 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:31Z","lastTransitionTime":"2026-02-28T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.286242 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.286309 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.286329 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.286354 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.286376 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:31Z","lastTransitionTime":"2026-02-28T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.388928 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.389002 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.389065 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.389093 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.389111 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:31Z","lastTransitionTime":"2026-02-28T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.492160 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.492232 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.492250 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.492274 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.492293 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:31Z","lastTransitionTime":"2026-02-28T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.595489 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.595583 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.595612 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.595646 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.595672 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:31Z","lastTransitionTime":"2026-02-28T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.699364 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.699449 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.699476 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.699505 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.699528 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:31Z","lastTransitionTime":"2026-02-28T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.802812 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.802885 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.802909 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.802938 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.802960 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:31Z","lastTransitionTime":"2026-02-28T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.906500 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.906561 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.906586 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.906617 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:31 crc kubenswrapper[4996]: I0228 09:02:31.906638 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:31Z","lastTransitionTime":"2026-02-28T09:02:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.009750 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.009817 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.009838 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.009865 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.009887 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.113586 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.113644 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.113657 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.113676 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.113690 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.216839 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.216888 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.216900 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.216917 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.216929 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.319545 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.319615 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.319642 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.319670 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.319692 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.422207 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.422245 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.422257 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.422272 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.422285 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.525746 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.525809 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.525826 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.525856 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.525880 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.629372 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.629450 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.629475 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.629504 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.629527 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.732112 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.732163 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.732175 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.732192 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.732206 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.834425 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.834495 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.834513 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.834539 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.834559 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.863656 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:32 crc kubenswrapper[4996]: E0228 09:02:32.863873 4996 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:32 crc kubenswrapper[4996]: E0228 09:02:32.863966 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs podName:326e8318-b5b5-4d7b-a838-01d28808161b nodeName:}" failed. No retries permitted until 2026-02-28 09:02:40.863940879 +0000 UTC m=+124.554743730 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs") pod "network-metrics-daemon-9n7bm" (UID: "326e8318-b5b5-4d7b-a838-01d28808161b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.937896 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.937962 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.937981 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.938045 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:32 crc kubenswrapper[4996]: I0228 09:02:32.938068 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:32Z","lastTransitionTime":"2026-02-28T09:02:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.032504 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.032521 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.032675 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:33 crc kubenswrapper[4996]: E0228 09:02:33.032983 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.033052 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:33 crc kubenswrapper[4996]: E0228 09:02:33.033231 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:33 crc kubenswrapper[4996]: E0228 09:02:33.033440 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:33 crc kubenswrapper[4996]: E0228 09:02:33.033598 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.040426 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.040493 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.040514 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.040536 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.040554 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.143291 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.143354 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.143373 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.143399 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.143416 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.245830 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.245907 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.245925 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.245949 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.245968 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.348902 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.348966 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.348983 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.349022 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.349035 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.451340 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.451413 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.451437 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.451463 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.451485 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.554526 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.554594 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.554613 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.554636 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.554655 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.657864 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.657933 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.657957 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.657988 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.658056 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.761668 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.762166 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.762347 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.762494 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.762624 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.865031 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.865110 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.865122 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.865137 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.865147 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.970095 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.970178 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.970204 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.970236 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:33 crc kubenswrapper[4996]: I0228 09:02:33.970333 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:33Z","lastTransitionTime":"2026-02-28T09:02:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.073772 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.073820 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.073834 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.073853 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.073864 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:34Z","lastTransitionTime":"2026-02-28T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.177952 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.178038 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.178056 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.178083 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.178101 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:34Z","lastTransitionTime":"2026-02-28T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.282138 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.282190 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.282208 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.282232 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.282251 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:34Z","lastTransitionTime":"2026-02-28T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.384701 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.384769 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.384788 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.384813 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.384833 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:34Z","lastTransitionTime":"2026-02-28T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.487111 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.487157 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.487168 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.487183 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.487194 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:34Z","lastTransitionTime":"2026-02-28T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.589560 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.589601 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.589611 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.589626 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.589638 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:34Z","lastTransitionTime":"2026-02-28T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.601993 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.625540 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.646504 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.666234 4996 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-snglm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.685990 4996 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.692076 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.692130 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 
09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.692148 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.692170 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.692188 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:34Z","lastTransitionTime":"2026-02-28T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.726809 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.751743 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.767393 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.787658 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.794231 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.794278 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.794292 4996 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.794315 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.794331 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:34Z","lastTransitionTime":"2026-02-28T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.799472 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a2
7330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.812885 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.830880 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.842467 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.862206 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 09:02:24.269638 7031 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 09:02:24.269651 7031 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b0138
33638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.875767 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.885873 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.896852 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:34Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:34 crc 
kubenswrapper[4996]: I0228 09:02:34.917904 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.917939 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.917950 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.917965 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:34 crc kubenswrapper[4996]: I0228 09:02:34.917978 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:34Z","lastTransitionTime":"2026-02-28T09:02:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.020707 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.020770 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.020785 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.020807 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.020824 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.033067 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:35 crc kubenswrapper[4996]: E0228 09:02:35.033252 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.033299 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:35 crc kubenswrapper[4996]: E0228 09:02:35.033422 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.033444 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.033269 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:35 crc kubenswrapper[4996]: E0228 09:02:35.033628 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:35 crc kubenswrapper[4996]: E0228 09:02:35.033795 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.123114 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.123181 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.123204 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.123232 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.123267 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.226625 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.226702 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.226726 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.226941 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.226965 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.329651 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.329715 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.329733 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.329760 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.329780 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.432939 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.433037 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.433064 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.433092 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.433120 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.536494 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.536541 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.536557 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.536580 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.536599 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.639550 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.639621 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.639639 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.639663 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.639680 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.742490 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.742545 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.742556 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.742574 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.742590 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.845812 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.845905 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.845929 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.845961 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.845984 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.948959 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.949047 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.949066 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.949089 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:35 crc kubenswrapper[4996]: I0228 09:02:35.949106 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:35Z","lastTransitionTime":"2026-02-28T09:02:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.051459 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.051950 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.052034 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.052128 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.052196 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:36Z","lastTransitionTime":"2026-02-28T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.155302 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.155360 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.155383 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.155411 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.155434 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:36Z","lastTransitionTime":"2026-02-28T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.258722 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.258766 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.258778 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.258793 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.258802 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:36Z","lastTransitionTime":"2026-02-28T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.362339 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.362390 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.362407 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.362427 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.362441 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:36Z","lastTransitionTime":"2026-02-28T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.466081 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.466162 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.466187 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.466223 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.466247 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:36Z","lastTransitionTime":"2026-02-28T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.569362 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.569425 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.569443 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.569468 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.569489 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:36Z","lastTransitionTime":"2026-02-28T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.674686 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.674762 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.674788 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.674820 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.674843 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:36Z","lastTransitionTime":"2026-02-28T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.777971 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.778097 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.778118 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.778144 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.778162 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:36Z","lastTransitionTime":"2026-02-28T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.881414 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.881475 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.881497 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.881524 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:36 crc kubenswrapper[4996]: I0228 09:02:36.881576 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:36Z","lastTransitionTime":"2026-02-28T09:02:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:36 crc kubenswrapper[4996]: E0228 09:02:36.982246 4996 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.032180 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.032246 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:37 crc kubenswrapper[4996]: E0228 09:02:37.032359 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:37 crc kubenswrapper[4996]: E0228 09:02:37.032518 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.033563 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.033571 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:37 crc kubenswrapper[4996]: E0228 09:02:37.033691 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:37 crc kubenswrapper[4996]: E0228 09:02:37.033828 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.055896 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.072309 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.090470 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc 
kubenswrapper[4996]: I0228 09:02:37.111489 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.130993 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: E0228 09:02:37.151828 4996 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.153658 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"
},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 
09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.172426 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.201746 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.228834 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.246713 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.261771 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.273564 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.289880 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.312275 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cd
f3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa930
89f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.330278 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:37 crc kubenswrapper[4996]: I0228 09:02:37.355429 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 09:02:24.269638 7031 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 09:02:24.269651 7031 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b0138
33638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:37Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.011899 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.011957 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.011974 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.011995 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.012047 4996 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:39Z","lastTransitionTime":"2026-02-28T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.032453 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.032521 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.032593 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.032797 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.032953 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.033154 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.033680 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.033984 4996 scope.go:117] "RemoveContainer" containerID="7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01" Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.034469 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.034576 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.045256 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.045498 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.045685 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.045750 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.045780 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:39Z","lastTransitionTime":"2026-02-28T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.070158 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.077207 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.077274 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.077297 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.077329 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.077352 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:39Z","lastTransitionTime":"2026-02-28T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.099891 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.106366 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.106448 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.106466 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.106492 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.106510 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:39Z","lastTransitionTime":"2026-02-28T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.128401 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.135250 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.135314 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.135333 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.135359 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.135378 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:39Z","lastTransitionTime":"2026-02-28T09:02:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.156249 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: E0228 09:02:39.156467 4996 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.456876 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/1.log" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.459848 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8"} Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.460411 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.478634 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.495185 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.513075 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f
80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.529092 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.541762 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.561404 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.576441 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.606814 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.636810 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5
aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.664109 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2266
24c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.679930 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: 
I0228 09:02:39.701855 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 09:02:24.269638 7031 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 09:02:24.269651 7031 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.715245 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.733619 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"hos
t\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.752869 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:39 crc kubenswrapper[4996]: I0228 09:02:39.771842 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:39Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.468140 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/2.log" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.469569 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/1.log" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.473877 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8" exitCode=1 Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.473944 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8"} Feb 28 09:02:40 crc 
kubenswrapper[4996]: I0228 09:02:40.473994 4996 scope.go:117] "RemoveContainer" containerID="7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.475394 4996 scope.go:117] "RemoveContainer" containerID="f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8" Feb 28 09:02:40 crc kubenswrapper[4996]: E0228 09:02:40.475747 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.498985 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.518464 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.537314 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc 
kubenswrapper[4996]: I0228 09:02:40.559716 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.580151 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.601868 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.622114 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f
80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.655363 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.679626 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.701927 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.720076 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.736237 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.757430 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.778082 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cd
f3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa930
89f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.794539 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4
446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.837846 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7727b77917593ba0760e912a73c64ad64f293999719714ea41c7ac36f5a6dc01\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"message\\\":\\\"Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 09:02:24.269638 7031 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"8b82f026-5975-4a1b-bb18-08d5d51147ec\\\\\\\", 
Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.38\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 09:02:24.269651 7031 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:40Z\\\",\\\"message\\\":\\\"270 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0228 09:02:40.067855 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operator-lifecycle-manager in Admin Network Policy controller\\\\nI0228 09:02:40.067863 7270 
metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 09:02:40.067871 7270 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0228 09:02:40.067865 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operator-lifecycle-manager Admin Network Policy controller: took 11.011µs\\\\nI0228 09:02:40.067924 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-vsphere-infra in Admin Network Policy controller\\\\nI0228 09:02:40.067958 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-vsphere-infra Admin Network Policy controller: took 36.171µs\\\\nI0228 09:02:40.067990 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-managed in Admin Network Policy controller\\\\nI0228 09:02:40.068038 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-managed Admin Network Policy controller: took 47.572µs\\\\nF0228 09:02:40.067925 7270 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
de\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:40Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:40 crc kubenswrapper[4996]: I0228 09:02:40.890825 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:40 crc kubenswrapper[4996]: E0228 09:02:40.891078 4996 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:40 crc kubenswrapper[4996]: E0228 09:02:40.891179 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs podName:326e8318-b5b5-4d7b-a838-01d28808161b nodeName:}" failed. No retries permitted until 2026-02-28 09:02:56.891159608 +0000 UTC m=+140.581962419 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs") pod "network-metrics-daemon-9n7bm" (UID: "326e8318-b5b5-4d7b-a838-01d28808161b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.032624 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.032676 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.033028 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:41 crc kubenswrapper[4996]: E0228 09:02:41.033154 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:41 crc kubenswrapper[4996]: E0228 09:02:41.033256 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:41 crc kubenswrapper[4996]: E0228 09:02:41.033328 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.033647 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:41 crc kubenswrapper[4996]: E0228 09:02:41.033811 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.049065 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.480900 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/2.log" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.486607 4996 scope.go:117] "RemoveContainer" containerID="f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8" Feb 28 09:02:41 crc kubenswrapper[4996]: E0228 09:02:41.486941 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.507910 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.535057 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.555894 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.573374 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f
80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.595419 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5
aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.616951 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.638139 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.655048 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.688950 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.706212 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.730113 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.751274 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.788929 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:40Z\\\",\\\"message\\\":\\\"270 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0228 09:02:40.067855 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operator-lifecycle-manager in Admin Network Policy controller\\\\nI0228 09:02:40.067863 7270 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 09:02:40.067871 7270 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0228 09:02:40.067865 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operator-lifecycle-manager Admin Network Policy controller: took 11.011µs\\\\nI0228 09:02:40.067924 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-vsphere-infra in Admin Network Policy controller\\\\nI0228 09:02:40.067958 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-vsphere-infra Admin Network Policy controller: took 36.171µs\\\\nI0228 09:02:40.067990 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-managed in Admin Network Policy controller\\\\nI0228 09:02:40.068038 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-managed Admin Network Policy controller: took 47.572µs\\\\nF0228 09:02:40.067925 7270 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start de\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b0138
33638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.807110 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed16ad2e-d428-413f-8052-78a3fd2616bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c128833e1f1b2f2a81cc9796fb6dee1675fa39b5a1c0dc266f6a5bf7dab6c98e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:05Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 09:00:39.137156 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 09:00:39.139528 1 observer_polling.go:159] Starting file observer\\\\nI0228 09:00:39.179174 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 09:00:39.184463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 09:01:05.334768 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 09:01:05.335063 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5f7ed00388bdbf05be73ad0d372e977cbfc875eeb82ae16cf1ef5a834d59903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14cfe234b07ea83736ecdebb6e64914bceec4444d7faf3d57168e8bf6d1ab315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.825498 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.839567 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:41 crc kubenswrapper[4996]: I0228 09:02:41.860301 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:41Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:42 crc 
kubenswrapper[4996]: E0228 09:02:42.153294 4996 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 09:02:43 crc kubenswrapper[4996]: I0228 09:02:43.032879 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:43 crc kubenswrapper[4996]: I0228 09:02:43.032942 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:43 crc kubenswrapper[4996]: I0228 09:02:43.033089 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:43 crc kubenswrapper[4996]: E0228 09:02:43.033280 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:43 crc kubenswrapper[4996]: I0228 09:02:43.033852 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:43 crc kubenswrapper[4996]: E0228 09:02:43.033994 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:43 crc kubenswrapper[4996]: E0228 09:02:43.034172 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:43 crc kubenswrapper[4996]: E0228 09:02:43.034267 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:45 crc kubenswrapper[4996]: I0228 09:02:45.032510 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:45 crc kubenswrapper[4996]: I0228 09:02:45.032580 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:45 crc kubenswrapper[4996]: I0228 09:02:45.032711 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:45 crc kubenswrapper[4996]: E0228 09:02:45.032952 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:45 crc kubenswrapper[4996]: I0228 09:02:45.032997 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:45 crc kubenswrapper[4996]: E0228 09:02:45.033212 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:45 crc kubenswrapper[4996]: E0228 09:02:45.033324 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:45 crc kubenswrapper[4996]: E0228 09:02:45.033451 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.032301 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.032342 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.032298 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.032462 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:47 crc kubenswrapper[4996]: E0228 09:02:47.032798 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:47 crc kubenswrapper[4996]: E0228 09:02:47.032712 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:47 crc kubenswrapper[4996]: E0228 09:02:47.032535 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:47 crc kubenswrapper[4996]: E0228 09:02:47.032895 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.048803 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed16ad2e-d428-413f-8052-78a3fd2616bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c128833e1f1b2f2a81cc9796fb6dee1675fa39b5a1c0dc266f6a5bf7dab6c98e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:05Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 
10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 09:00:39.137156 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 09:00:39.139528 1 observer_polling.go:159] Starting file observer\\\\nI0228 09:00:39.179174 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 09:00:39.184463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 09:01:05.334768 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 09:01:05.335063 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5f7ed00388bdbf05be73ad0d372e977cbfc875eeb82ae16cf1ef5a834d59903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14cfe234b07ea83736ecdebb6e64914bceec4444d7faf3d57168e8bf6d1ab315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.062224 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.080588 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.091733 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.120810 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:40Z\\\",\\\"message\\\":\\\"270 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0228 09:02:40.067855 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operator-lifecycle-manager in Admin Network Policy controller\\\\nI0228 09:02:40.067863 7270 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 09:02:40.067871 7270 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0228 09:02:40.067865 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operator-lifecycle-manager Admin Network Policy controller: took 11.011µs\\\\nI0228 09:02:40.067924 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-vsphere-infra in Admin Network Policy controller\\\\nI0228 09:02:40.067958 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-vsphere-infra Admin Network Policy controller: took 36.171µs\\\\nI0228 09:02:40.067990 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-managed in Admin Network Policy controller\\\\nI0228 09:02:40.068038 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-managed Admin Network Policy controller: took 47.572µs\\\\nF0228 09:02:40.067925 7270 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start de\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b0138
33638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.138264 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: E0228 09:02:47.154309 4996 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.158306 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.176555 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 
2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.197301 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.215962 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.232826 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.247243 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f
80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.279859 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.300562 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.315932 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.334695 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:47 crc kubenswrapper[4996]: I0228 09:02:47.346721 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:47Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.032928 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.032956 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.033050 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.033138 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.033293 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.033417 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.033469 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.033554 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.309430 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.309482 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.309500 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.309527 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.309546 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:49Z","lastTransitionTime":"2026-02-28T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.330387 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:49Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.335674 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.335764 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.335883 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.335937 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.335958 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:49Z","lastTransitionTime":"2026-02-28T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.355838 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:49Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.361172 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.361217 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.361235 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.361257 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.361334 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:49Z","lastTransitionTime":"2026-02-28T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.382322 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:49Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.388678 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.388733 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.388751 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.388774 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.388792 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:49Z","lastTransitionTime":"2026-02-28T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.409194 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:49Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.414214 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.414286 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.414306 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.414336 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:49 crc kubenswrapper[4996]: I0228 09:02:49.414355 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:49Z","lastTransitionTime":"2026-02-28T09:02:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.435568 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:49Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:49 crc kubenswrapper[4996]: E0228 09:02:49.435787 4996 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 09:02:51 crc kubenswrapper[4996]: I0228 09:02:51.032763 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:51 crc kubenswrapper[4996]: I0228 09:02:51.032804 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:51 crc kubenswrapper[4996]: E0228 09:02:51.032984 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:51 crc kubenswrapper[4996]: I0228 09:02:51.033060 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:51 crc kubenswrapper[4996]: I0228 09:02:51.033128 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:51 crc kubenswrapper[4996]: E0228 09:02:51.033379 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:51 crc kubenswrapper[4996]: E0228 09:02:51.033501 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:51 crc kubenswrapper[4996]: E0228 09:02:51.033557 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:52 crc kubenswrapper[4996]: I0228 09:02:52.033224 4996 scope.go:117] "RemoveContainer" containerID="f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8" Feb 28 09:02:52 crc kubenswrapper[4996]: E0228 09:02:52.033873 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" Feb 28 09:02:52 crc kubenswrapper[4996]: E0228 09:02:52.155890 4996 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 09:02:53 crc kubenswrapper[4996]: I0228 09:02:53.032163 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:53 crc kubenswrapper[4996]: I0228 09:02:53.032163 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:53 crc kubenswrapper[4996]: I0228 09:02:53.032797 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:53 crc kubenswrapper[4996]: E0228 09:02:53.033140 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:53 crc kubenswrapper[4996]: I0228 09:02:53.033185 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:53 crc kubenswrapper[4996]: E0228 09:02:53.033405 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:53 crc kubenswrapper[4996]: E0228 09:02:53.033829 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:53 crc kubenswrapper[4996]: E0228 09:02:53.034107 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:54 crc kubenswrapper[4996]: I0228 09:02:54.047713 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 28 09:02:55 crc kubenswrapper[4996]: I0228 09:02:55.032989 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:55 crc kubenswrapper[4996]: E0228 09:02:55.033394 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:55 crc kubenswrapper[4996]: I0228 09:02:55.033160 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:55 crc kubenswrapper[4996]: E0228 09:02:55.034000 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:55 crc kubenswrapper[4996]: I0228 09:02:55.033166 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:55 crc kubenswrapper[4996]: E0228 09:02:55.034116 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:55 crc kubenswrapper[4996]: I0228 09:02:55.033104 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:55 crc kubenswrapper[4996]: E0228 09:02:55.034187 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:56 crc kubenswrapper[4996]: I0228 09:02:56.982207 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:56 crc kubenswrapper[4996]: E0228 09:02:56.982388 4996 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:56 crc kubenswrapper[4996]: E0228 09:02:56.982469 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs podName:326e8318-b5b5-4d7b-a838-01d28808161b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.982451217 +0000 UTC m=+172.673254048 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs") pod "network-metrics-daemon-9n7bm" (UID: "326e8318-b5b5-4d7b-a838-01d28808161b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.032149 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:57 crc kubenswrapper[4996]: E0228 09:02:57.032685 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.032308 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:57 crc kubenswrapper[4996]: E0228 09:02:57.033164 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.032275 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:57 crc kubenswrapper[4996]: E0228 09:02:57.033593 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.034571 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:57 crc kubenswrapper[4996]: E0228 09:02:57.034933 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.053768 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.071664 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2266
24c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.084494 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: 
I0228 09:02:57.107686 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:40Z\\\",\\\"message\\\":\\\"270 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0228 09:02:40.067855 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operator-lifecycle-manager in Admin Network Policy controller\\\\nI0228 09:02:40.067863 7270 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 09:02:40.067871 7270 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0228 09:02:40.067865 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operator-lifecycle-manager Admin Network Policy controller: took 11.011µs\\\\nI0228 09:02:40.067924 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-vsphere-infra in Admin Network Policy controller\\\\nI0228 09:02:40.067958 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-vsphere-infra Admin Network Policy controller: took 36.171µs\\\\nI0228 09:02:40.067990 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-managed in Admin Network Policy controller\\\\nI0228 09:02:40.068038 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-managed Admin Network Policy controller: took 47.572µs\\\\nF0228 09:02:40.067925 7270 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start de\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b0138
33638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.127407 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed16ad2e-d428-413f-8052-78a3fd2616bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c128833e1f1b2f2a81cc9796fb6dee1675fa39b5a1c0dc266f6a5bf7dab6c98e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:05Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 09:00:39.137156 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 09:00:39.139528 1 observer_polling.go:159] Starting file observer\\\\nI0228 09:00:39.179174 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 09:00:39.184463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 09:01:05.334768 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 09:01:05.335063 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5f7ed00388bdbf05be73ad0d372e977cbfc875eeb82ae16cf1ef5a834d59903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14cfe234b07ea83736ecdebb6e64914bceec4444d7faf3d57168e8bf6d1ab315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.146215 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.163935 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.176840 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc 
kubenswrapper[4996]: I0228 09:02:57.190477 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.207148 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.222159 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.235952 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f
80f92e75812bd80a9d80e0d972eb0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.250695 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb7eb3b0-5d01-4163-b338-86c9be0e6dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b052d2385ced8432d3ef836df06e53e5cddc8cdf549c8b83719e06526a0cd9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fee7bb94b92e6dcb4ada0cfbb6fcd25ee6800820e3d1ec2d28fa55ac405c27d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3ee97da3b6d5344dbdf716bcd2ad26eb73f3814a680f4ca76ed40222e9a4a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83383bc28500bd80dabf45e1c490811d9495809cfdaf6450cf575a53b3f7aaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://83383bc28500bd80dabf45e1c490811d9495809cfdaf6450cf575a53b3f7aaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.270381 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5
aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.289220 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.302597 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.315599 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: I0228 09:02:57.342957 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:57Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:57 crc kubenswrapper[4996]: E0228 09:02:57.561861 4996 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.032981 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.033065 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.033157 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.033327 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.033376 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.033598 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.033721 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.033870 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.516983 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.517305 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:04:03.517272577 +0000 UTC m=+207.208075398 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.833335 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.833395 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.833407 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.833422 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.833432 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:59Z","lastTransitionTime":"2026-02-28T09:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.848175 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:59Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.853860 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.853919 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.853935 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.853957 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.853974 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:59Z","lastTransitionTime":"2026-02-28T09:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.871913 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:59Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.876323 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.876377 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.876390 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.876412 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.876425 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:59Z","lastTransitionTime":"2026-02-28T09:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.889619 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:59Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.894480 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.894547 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.894559 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.894579 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.894592 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:59Z","lastTransitionTime":"2026-02-28T09:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.911713 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:59Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.917696 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.917749 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.917765 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.917788 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.917807 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:02:59Z","lastTransitionTime":"2026-02-28T09:02:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:02:59 crc kubenswrapper[4996]: I0228 09:02:59.921565 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.921855 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.921903 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.921922 4996 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.922031 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 09:04:03.921975118 +0000 UTC m=+207.612777969 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.938129 4996 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"939d05c1-c101-41a6-8708-5f2e09c96113\\\",\\\"systemUUID\\\":\\\"51ba2d55-b443-4293-b707-d84c05817b7c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:02:59Z is after 2025-08-24T17:21:41Z" Feb 28 09:02:59 crc kubenswrapper[4996]: E0228 09:02:59.938486 4996 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.022981 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.023107 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:03:00 crc kubenswrapper[4996]: E0228 09:03:00.023201 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 09:03:00 crc kubenswrapper[4996]: E0228 09:03:00.023236 4996 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 09:03:00 crc kubenswrapper[4996]: E0228 09:03:00.023255 4996 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.023252 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:03:00 crc kubenswrapper[4996]: E0228 09:03:00.023269 4996 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:03:00 crc kubenswrapper[4996]: E0228 09:03:00.023326 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 09:04:04.023303095 +0000 UTC m=+207.714105936 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 09:03:00 crc kubenswrapper[4996]: E0228 09:03:00.023339 4996 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:03:00 crc kubenswrapper[4996]: E0228 09:03:00.023662 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:04:04.023634543 +0000 UTC m=+207.714437394 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 09:03:00 crc kubenswrapper[4996]: E0228 09:03:00.023699 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 09:04:04.023681194 +0000 UTC m=+207.714484045 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.584388 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-snglm_6ed5a0c7-4cae-4140-be04-b7a0f3899920/kube-multus/0.log" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.584468 4996 generic.go:334] "Generic (PLEG): container finished" podID="6ed5a0c7-4cae-4140-be04-b7a0f3899920" containerID="18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a" exitCode=1 Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.584506 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-snglm" event={"ID":"6ed5a0c7-4cae-4140-be04-b7a0f3899920","Type":"ContainerDied","Data":"18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a"} Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.585218 4996 scope.go:117] "RemoveContainer" containerID="18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.602779 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.616280 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.633548 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc 
kubenswrapper[4996]: I0228 09:03:00.652980 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb7eb3b0-5d01-4163-b338-86c9be0e6dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b052d2385ced8432d3ef836df06e53e5cddc8cdf549c8b83719e06526a0cd9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fee7bb94b92e6dcb4ada0cfbb6fcd25ee6800820e3d1ec2d28fa55ac405c27d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3ee97da3b6d5344dbdf716bcd2ad26eb73f3814a680f4ca76ed40222e9a4a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83383bc28500bd80dabf45e1c490811d9495809cfdaf6450cf575a53b3f7aaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83383bc28500bd80dabf45e1c490811d9495809cfdaf6450cf575a53b3f7aaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.672963 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.688991 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.715536 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:03:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:03:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:03:00Z\\\",\\\"message\\\":\\\"2026-02-28T09:02:15+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5fa9a50d-9c69-4135-b3b5-347bace01dd8\\\\n2026-02-28T09:02:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fa9a50d-9c69-4135-b3b5-347bace01dd8 to /host/opt/cni/bin/\\\\n2026-02-28T09:02:15Z [verbose] multus-daemon started\\\\n2026-02-28T09:02:15Z [verbose] Readiness Indicator file check\\\\n2026-02-28T09:03:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.732139 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f80f92e75812bd80a9d80e0d972eb
0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.767863 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.788542 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.800139 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.813742 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.825116 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.837688 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed16ad2e-d428-413f-8052-78a3fd2616bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c128833e1f1b2f2a81cc9796fb6dee1675fa39b5a1c0dc266f6a5bf7dab6c98e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:05Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 09:00:39.137156 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 09:00:39.139528 1 observer_polling.go:159] Starting file observer\\\\nI0228 09:00:39.179174 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 09:00:39.184463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 09:01:05.334768 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 09:01:05.335063 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5f7ed00388bdbf05be73ad0d372e977cbfc875eeb82ae16cf1ef5a834d59903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14cfe234b07ea83736ecdebb6e64914bceec4444d7faf3d57168e8bf6d1ab315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.854952 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.870333 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.883412 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:00 crc kubenswrapper[4996]: I0228 09:03:00.910882 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:40Z\\\",\\\"message\\\":\\\"270 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0228 09:02:40.067855 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operator-lifecycle-manager in Admin Network Policy controller\\\\nI0228 09:02:40.067863 7270 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 09:02:40.067871 7270 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0228 09:02:40.067865 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operator-lifecycle-manager Admin Network Policy controller: took 11.011µs\\\\nI0228 09:02:40.067924 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-vsphere-infra in Admin Network Policy controller\\\\nI0228 09:02:40.067958 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-vsphere-infra Admin Network Policy controller: took 36.171µs\\\\nI0228 09:02:40.067990 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-managed in Admin Network Policy controller\\\\nI0228 09:02:40.068038 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-managed Admin Network Policy controller: took 47.572µs\\\\nF0228 09:02:40.067925 7270 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start de\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b0138
33638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:00Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.032939 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.033025 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.033044 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.033172 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:01 crc kubenswrapper[4996]: E0228 09:03:01.033277 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:03:01 crc kubenswrapper[4996]: E0228 09:03:01.033350 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:03:01 crc kubenswrapper[4996]: E0228 09:03:01.033443 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:03:01 crc kubenswrapper[4996]: E0228 09:03:01.033487 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.592074 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-snglm_6ed5a0c7-4cae-4140-be04-b7a0f3899920/kube-multus/0.log" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.593321 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-snglm" event={"ID":"6ed5a0c7-4cae-4140-be04-b7a0f3899920","Type":"ContainerStarted","Data":"f9163596ea18ff2974cb93f682ea825211ebda9e39a3d64a116037ee105d6806"} Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.629157 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"098b3873-4714-4c6c-9e7e-16e08633ae74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f9fc9e07077608cb589897f0c8073a2a69c367d66b1ad3b581d792009dcfda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8c1bcb6e17e16db84510ebcca9eb4a6ed36451fd7808f8f5eb012b9791b165c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d425aa49b892e1fb73a6fd336b301bf035aecf218dd673260a4435f35262af1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6a8d1d5232c9dd16b7c06cc21980b10e05c009d7104708e5983791838815640\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2789ff9d52c1ce89435b29b9c5493e92f7fe829bc2fea54b0c3b9668600c04cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\
"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16deb3b99c949461c71948d17a4e86679eb7bf362ac3e5fec7b2678552a173b4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81e3370951d24a12b58f332ac9a201e9d918203593d1a64623b899de6be045fe\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4ba719f90f53b09221a843b9cc52fd70950c00131cba0ac2edb5034e367ea07b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.649197 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"10a0fde0-f278-4f9f-ae6f-3a036e85a6eb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:34Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0228 09:01:33.733554 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 09:01:33.733712 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 09:01:33.734975 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2469229100/tls.crt::/tmp/serving-cert-2469229100/tls.key\\\\\\\"\\\\nI0228 09:01:34.144406 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 09:01:34.147884 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 09:01:34.147920 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 09:01:34.147962 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 09:01:34.147973 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 09:01:34.154363 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0228 09:01:34.154398 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154407 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 09:01:34.154416 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0228 09:01:34.154410 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 09:01:34.154425 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 09:01:34.154446 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 09:01:34.154455 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 09:01:34.157731 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:01:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e9e9f25649f7c573329c97a6b9e7e172e5
aea08c642cdd9a5fa9a33a6d57058\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.668414 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.681364 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.693644 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vsncw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c160bed-3a16-439b-b4b7-130d2cba6252\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33a20680f09b51b47109100a4bd6b410b83a7a27330b10efb8b3f15b2b071700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5kpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vsncw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.710481 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ed16ad2e-d428-413f-8052-78a3fd2616bb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c128833e1f1b2f2a81cc9796fb6dee1675fa39b5a1c0dc266f6a5bf7dab6c98e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"c
ontainerID\\\":\\\"cri-o://fb46ab87d78e6bf16a869e8e487eca6d7a1bc67faf7e280327f4836bde85a541\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T09:01:05Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 09:00:39.137156 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 09:00:39.139528 1 observer_polling.go:159] Starting file observer\\\\nI0228 09:00:39.179174 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 09:00:39.184463 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 09:01:05.334768 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 09:01:05.335063 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5f7ed00388bdbf05be73ad0d372e977cbfc875eeb82ae16cf1ef5a834d59903\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://14cfe234b07ea83736ecdebb6e64914bceec4444d7faf3d57168e8bf6d1ab315\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.728049 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5fc31c43144a1139234620464c4e21ba181c27366f4c07e1a37ced9a9125113\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.752450 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e7ef261-12c5-4b48-b5e1-32dcaf0f4277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://796d4f91fb85401d5c00b487380f6cdf3790f750064af4ce0a2dd80202dceb43\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea78dc22c4f502f896b80730cfae82577323589cb45bfee078dba988f725357e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7efd4872816821496d45b644363e8002fd65976eb4038d107974becc5fea0024\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://57d6ca26703721a58490a5756268c0e0a7d00fe39284094be1e43b791384f753\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f226624c876b2606444dc6b03774e1129c0678f816b504570a9f7129c6f57311\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66871051a8563d0e0b81f9c73ed9ca3226a1b3e67c9217593193708e055d719f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a8e8855e23e088ed0df41b75185c0d7d6f85aad0d358c79863162821693e4ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zwkcp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ddgnd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.770277 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a98c14ee-40d6-4e30-9390-154743a75c63\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4db41d821e2ee68c9e76785945041533921756ba9d46f6749e608b5391d42026\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jfhll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jg4sj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.794448 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6730cd9d-a0be-4a00-966e-f936e7b888b6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:02:40Z\\\",\\\"message\\\":\\\"270 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0228 09:02:40.067855 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-operator-lifecycle-manager in Admin Network Policy controller\\\\nI0228 09:02:40.067863 7270 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 09:02:40.067871 7270 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0228 09:02:40.067865 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-operator-lifecycle-manager Admin Network Policy controller: took 11.011µs\\\\nI0228 09:02:40.067924 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-vsphere-infra in Admin Network Policy controller\\\\nI0228 09:02:40.067958 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-vsphere-infra Admin Network Policy controller: took 36.171µs\\\\nI0228 09:02:40.067990 7270 admin_network_policy_namespace.go:53] Processing sync for Namespace openshift-config-managed in Admin Network Policy controller\\\\nI0228 09:02:40.068038 7270 admin_network_policy_namespace.go:56] Finished syncing Namespace openshift-config-managed Admin Network Policy controller: took 47.572µs\\\\nF0228 09:02:40.067925 7270 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start de\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:39Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hjj82_openshift-ovn-kubernetes(6730cd9d-a0be-4a00-966e-f936e7b888b6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c970cc2fc10b1b0138
33638367d8a0e0d5d3478285404bd6a5006612ce2fef49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:02:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2k5h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hjj82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.811253 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.826727 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d7hbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88053881-36e8-4bf7-b911-d2457f8bca30\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d489f9a866ec97de3cf627605e1291147acf51e12a23cde8952a4553b18f6662\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bz55l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:18Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d7hbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.841066 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326e8318-b5b5-4d7b-a838-01d28808161b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:25Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pdk24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:25Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-9n7bm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc 
kubenswrapper[4996]: I0228 09:03:01.857004 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eb7eb3b0-5d01-4163-b338-86c9be0e6dd6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:00:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b052d2385ced8432d3ef836df06e53e5cddc8cdf549c8b83719e06526a0cd9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fee7bb94b92e6dcb4ada0cfbb6fcd25ee6800820e3d1ec2d28fa55ac405c27d\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3ee97da3b6d5344dbdf716bcd2ad26eb73f3814a680f4ca76ed40222e9a4a38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:00:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83383bc28500bd80dabf45e1c490811d9495809cfdaf6450cf575a53b3f7aaf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83383bc28500bd80dabf45e1c490811d9495809cfdaf6450cf575a53b3f7aaf4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T09:00:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T09:00:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:00:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.872869 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26505c3b701e11461b843245783d58d206b28687ac9c94e78bfc67453eeefb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.891848 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T09:01:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c13a02f86efbc9a53aad950dbddf4aa907e387dbe11ae0712d5483dcc2a878\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://581c848b8fd56450d1056da93ad1010d523a5f128756604764038d551ea9151b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:01:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.910374 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-snglm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6ed5a0c7-4cae-4140-be04-b7a0f3899920\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:03:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9163596ea18ff2974cb93f682ea825211ebda9e39a3d64a116037ee105d6806\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T09:03:00Z\\\",\\\"message\\\":\\\"2026-02-28T09:02:15+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5fa9a50d-9c69-4135-b3b5-347bace01dd8\\\\n2026-02-28T09:02:15+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5fa9a50d-9c69-4135-b3b5-347bace01dd8 to /host/opt/cni/bin/\\\\n2026-02-28T09:02:15Z [verbose] multus-daemon started\\\\n2026-02-28T09:02:15Z [verbose] 
Readiness Indicator file check\\\\n2026-02-28T09:03:00Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T09:02:13Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:03:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cl8vk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:11Z\\\"}}\" for pod \"openshift-multus\"/\"multus-snglm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:01 crc kubenswrapper[4996]: I0228 09:03:01.922694 4996 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"20e2a5c8-d6c8-4512-a359-749b6b66d989\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T09:02:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5364c00709385ffa73ac4586b38ace47f31f30298213332398546c919bc4e8fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3e99b43f80f92e75812bd80a9d80e0d972eb
0e2b5fafef79a225d891ab71345\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T09:02:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-282z6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T09:02:24Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zf2l5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T09:03:01Z is after 2025-08-24T17:21:41Z" Feb 28 09:03:02 crc kubenswrapper[4996]: E0228 09:03:02.562905 4996 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 09:03:03 crc kubenswrapper[4996]: I0228 09:03:03.032827 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:03 crc kubenswrapper[4996]: I0228 09:03:03.032896 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:03:03 crc kubenswrapper[4996]: E0228 09:03:03.033045 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:03:03 crc kubenswrapper[4996]: I0228 09:03:03.033135 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:03:03 crc kubenswrapper[4996]: E0228 09:03:03.033473 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:03:03 crc kubenswrapper[4996]: I0228 09:03:03.033539 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:03:03 crc kubenswrapper[4996]: E0228 09:03:03.033673 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:03:03 crc kubenswrapper[4996]: E0228 09:03:03.033872 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:03:05 crc kubenswrapper[4996]: I0228 09:03:05.032434 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:03:05 crc kubenswrapper[4996]: I0228 09:03:05.032561 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:05 crc kubenswrapper[4996]: I0228 09:03:05.032467 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:03:05 crc kubenswrapper[4996]: E0228 09:03:05.032650 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:03:05 crc kubenswrapper[4996]: E0228 09:03:05.032926 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:03:05 crc kubenswrapper[4996]: E0228 09:03:05.032812 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:03:05 crc kubenswrapper[4996]: I0228 09:03:05.033268 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:03:05 crc kubenswrapper[4996]: E0228 09:03:05.033403 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.033932 4996 scope.go:117] "RemoveContainer" containerID="f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.613258 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/2.log" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.616256 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerStarted","Data":"eeb43d191b894dbe97a619103a6478b4a588c7b898dbf3d0e9cc31d76a9b6291"} Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.616752 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.646155 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-d7hbs" podStartSLOduration=103.646120682 podStartE2EDuration="1m43.646120682s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.634753732 +0000 UTC m=+150.325556633" watchObservedRunningTime="2026-02-28 09:03:06.646120682 +0000 UTC m=+150.336923533" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.702751 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-snglm" podStartSLOduration=103.702727916 podStartE2EDuration="1m43.702727916s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-28 09:03:06.702542761 +0000 UTC m=+150.393345572" watchObservedRunningTime="2026-02-28 09:03:06.702727916 +0000 UTC m=+150.393530737" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.720823 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zf2l5" podStartSLOduration=102.720803935 podStartE2EDuration="1m42.720803935s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.720398976 +0000 UTC m=+150.411201787" watchObservedRunningTime="2026-02-28 09:03:06.720803935 +0000 UTC m=+150.411606746" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.736331 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=12.736316433 podStartE2EDuration="12.736316433s" podCreationTimestamp="2026-02-28 09:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.735925704 +0000 UTC m=+150.426728515" watchObservedRunningTime="2026-02-28 09:03:06.736316433 +0000 UTC m=+150.427119244" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.839292 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vsncw" podStartSLOduration=103.839272589 podStartE2EDuration="1m43.839272589s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.839047833 +0000 UTC m=+150.529850674" watchObservedRunningTime="2026-02-28 09:03:06.839272589 +0000 UTC m=+150.530075390" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.876646 4996 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=56.876631675 podStartE2EDuration="56.876631675s" podCreationTimestamp="2026-02-28 09:02:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.876145694 +0000 UTC m=+150.566948505" watchObservedRunningTime="2026-02-28 09:03:06.876631675 +0000 UTC m=+150.567434486" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.894331 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=59.894313756 podStartE2EDuration="59.894313756s" podCreationTimestamp="2026-02-28 09:02:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.893076496 +0000 UTC m=+150.583879307" watchObservedRunningTime="2026-02-28 09:03:06.894313756 +0000 UTC m=+150.585116567" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.924161 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ddgnd" podStartSLOduration=103.924140004 podStartE2EDuration="1m43.924140004s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.910815628 +0000 UTC m=+150.601618439" watchObservedRunningTime="2026-02-28 09:03:06.924140004 +0000 UTC m=+150.614942815" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.924524 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podStartSLOduration=103.924519252 podStartE2EDuration="1m43.924519252s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.923746865 +0000 UTC m=+150.614549686" watchObservedRunningTime="2026-02-28 09:03:06.924519252 +0000 UTC m=+150.615322073" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.945235 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podStartSLOduration=103.945213204 podStartE2EDuration="1m43.945213204s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.944914647 +0000 UTC m=+150.635717478" watchObservedRunningTime="2026-02-28 09:03:06.945213204 +0000 UTC m=+150.636016015" Feb 28 09:03:06 crc kubenswrapper[4996]: I0228 09:03:06.961890 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=25.96186299 podStartE2EDuration="25.96186299s" podCreationTimestamp="2026-02-28 09:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:06.961344488 +0000 UTC m=+150.652147309" watchObservedRunningTime="2026-02-28 09:03:06.96186299 +0000 UTC m=+150.652665811" Feb 28 09:03:07 crc kubenswrapper[4996]: I0228 09:03:07.032954 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:03:07 crc kubenswrapper[4996]: I0228 09:03:07.032989 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:03:07 crc kubenswrapper[4996]: I0228 09:03:07.033022 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:07 crc kubenswrapper[4996]: I0228 09:03:07.033060 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:03:07 crc kubenswrapper[4996]: E0228 09:03:07.034030 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:03:07 crc kubenswrapper[4996]: E0228 09:03:07.034194 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:03:07 crc kubenswrapper[4996]: E0228 09:03:07.034366 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:03:07 crc kubenswrapper[4996]: E0228 09:03:07.034439 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:03:07 crc kubenswrapper[4996]: I0228 09:03:07.243619 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9n7bm"] Feb 28 09:03:07 crc kubenswrapper[4996]: E0228 09:03:07.564573 4996 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 09:03:07 crc kubenswrapper[4996]: I0228 09:03:07.622208 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:07 crc kubenswrapper[4996]: E0228 09:03:07.623933 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:03:09 crc kubenswrapper[4996]: I0228 09:03:09.033057 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:03:09 crc kubenswrapper[4996]: E0228 09:03:09.033225 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:03:09 crc kubenswrapper[4996]: I0228 09:03:09.033064 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:03:09 crc kubenswrapper[4996]: I0228 09:03:09.033307 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:03:09 crc kubenswrapper[4996]: I0228 09:03:09.033394 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:09 crc kubenswrapper[4996]: E0228 09:03:09.033357 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:03:09 crc kubenswrapper[4996]: E0228 09:03:09.033553 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:03:09 crc kubenswrapper[4996]: E0228 09:03:09.033655 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.340526 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.340596 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.340614 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.340638 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.340657 4996 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T09:03:10Z","lastTransitionTime":"2026-02-28T09:03:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.406908 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8"] Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.407566 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.411726 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.412135 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.412752 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.414362 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.560113 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8d67c69-c500-479d-a056-2baa6b947b75-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.560178 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a8d67c69-c500-479d-a056-2baa6b947b75-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: 
\"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.560210 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8d67c69-c500-479d-a056-2baa6b947b75-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.560231 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8d67c69-c500-479d-a056-2baa6b947b75-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.560287 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a8d67c69-c500-479d-a056-2baa6b947b75-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.661504 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a8d67c69-c500-479d-a056-2baa6b947b75-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.661616 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8d67c69-c500-479d-a056-2baa6b947b75-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.661675 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a8d67c69-c500-479d-a056-2baa6b947b75-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.661732 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8d67c69-c500-479d-a056-2baa6b947b75-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.661769 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8d67c69-c500-479d-a056-2baa6b947b75-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.661826 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/a8d67c69-c500-479d-a056-2baa6b947b75-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.661661 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/a8d67c69-c500-479d-a056-2baa6b947b75-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.663495 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a8d67c69-c500-479d-a056-2baa6b947b75-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.673391 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8d67c69-c500-479d-a056-2baa6b947b75-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.690707 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8d67c69-c500-479d-a056-2baa6b947b75-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rw4x8\" (UID: \"a8d67c69-c500-479d-a056-2baa6b947b75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: I0228 09:03:10.729713 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" Feb 28 09:03:10 crc kubenswrapper[4996]: W0228 09:03:10.750281 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d67c69_c500_479d_a056_2baa6b947b75.slice/crio-fd67a6c891badee12d57f1dad7be7747700eb9cc40e5e477a4811284f18e0367 WatchSource:0}: Error finding container fd67a6c891badee12d57f1dad7be7747700eb9cc40e5e477a4811284f18e0367: Status 404 returned error can't find the container with id fd67a6c891badee12d57f1dad7be7747700eb9cc40e5e477a4811284f18e0367 Feb 28 09:03:11 crc kubenswrapper[4996]: I0228 09:03:11.033105 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:03:11 crc kubenswrapper[4996]: I0228 09:03:11.033174 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:11 crc kubenswrapper[4996]: I0228 09:03:11.033306 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:03:11 crc kubenswrapper[4996]: E0228 09:03:11.033466 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 09:03:11 crc kubenswrapper[4996]: I0228 09:03:11.033546 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:03:11 crc kubenswrapper[4996]: E0228 09:03:11.033635 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 09:03:11 crc kubenswrapper[4996]: E0228 09:03:11.034074 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9n7bm" podUID="326e8318-b5b5-4d7b-a838-01d28808161b" Feb 28 09:03:11 crc kubenswrapper[4996]: E0228 09:03:11.034178 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 09:03:11 crc kubenswrapper[4996]: I0228 09:03:11.082245 4996 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 28 09:03:11 crc kubenswrapper[4996]: I0228 09:03:11.092839 4996 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 28 09:03:11 crc kubenswrapper[4996]: I0228 09:03:11.639463 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" event={"ID":"a8d67c69-c500-479d-a056-2baa6b947b75","Type":"ContainerStarted","Data":"a372283e8dfba493eeab53d6055e2e84f44158fc8cb6f5f61014e13d297861c6"} Feb 28 09:03:11 crc kubenswrapper[4996]: I0228 09:03:11.639890 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" event={"ID":"a8d67c69-c500-479d-a056-2baa6b947b75","Type":"ContainerStarted","Data":"fd67a6c891badee12d57f1dad7be7747700eb9cc40e5e477a4811284f18e0367"} Feb 28 09:03:11 crc kubenswrapper[4996]: I0228 09:03:11.660147 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rw4x8" podStartSLOduration=108.66004014 podStartE2EDuration="1m48.66004014s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:11.65922706 +0000 UTC m=+155.350029931" watchObservedRunningTime="2026-02-28 09:03:11.66004014 +0000 UTC m=+155.350842951" Feb 28 09:03:12 crc kubenswrapper[4996]: I0228 09:03:12.049476 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 28 09:03:12 crc kubenswrapper[4996]: I0228 09:03:12.655205 4996 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.032980 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.033088 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.033356 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.033351 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.040214 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.040324 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.040402 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.044335 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.047532 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 28 09:03:13 crc kubenswrapper[4996]: I0228 09:03:13.047840 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.406255 4996 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.469298 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=8.469265485 podStartE2EDuration="8.469265485s" podCreationTimestamp="2026-02-28 09:03:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:12.712501806 +0000 UTC m=+156.403304637" watchObservedRunningTime="2026-02-28 09:03:20.469265485 +0000 UTC m=+164.160068356" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.470706 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hpmk6"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.473286 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.476869 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.480128 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.481229 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.483186 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.485981 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xd5f6"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.486845 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.488102 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.488965 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cqrkh"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.489783 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.499071 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.499362 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.499474 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.499363 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.499980 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.500137 4996 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.500172 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.500275 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.500365 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.500592 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.500836 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.500031 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.501358 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.501589 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.501738 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.502044 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 28 09:03:20 crc 
kubenswrapper[4996]: I0228 09:03:20.502133 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.502342 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.502391 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.502511 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.502656 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.502712 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.502797 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.502937 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.502996 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5qvz8"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.503123 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.503276 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 28 09:03:20 
crc kubenswrapper[4996]: I0228 09:03:20.503411 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.503415 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.503568 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.504084 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.512676 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g92n"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.513906 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-584vx"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.514382 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.519042 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.520799 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.521123 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.521291 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.526141 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.528099 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.528705 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.533822 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.534382 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.534439 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4lxnc"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.534628 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.534798 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-4lxnc" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.534965 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.535077 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.535377 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.535434 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.535588 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.535861 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.535392 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.535970 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.536056 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.536078 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.536298 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.536349 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.537283 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.537471 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.542101 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-plkrt"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.542653 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-v49dx"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.537498 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.542883 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.537540 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.542986 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.537639 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.537732 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.537776 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.537843 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.537974 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.555568 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.560308 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.560424 4996 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.560700 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.560918 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.561560 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.562454 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.563196 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.564130 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.564308 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rzg82"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.564392 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.567150 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.568172 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.568515 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.568643 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.568703 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.569338 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.569352 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.581620 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.581674 4996 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.581672 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589601 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db230af9-fc19-435e-84f0-1751b4a23f15-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nd447\" (UID: \"db230af9-fc19-435e-84f0-1751b4a23f15\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589646 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49dadef9-e7d3-492d-859f-a97b88a10d02-audit-dir\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589673 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-config\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589697 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-config\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589722 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db230af9-fc19-435e-84f0-1751b4a23f15-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nd447\" (UID: \"db230af9-fc19-435e-84f0-1751b4a23f15\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589746 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/49dadef9-e7d3-492d-859f-a97b88a10d02-encryption-config\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589774 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngv89\" (UniqueName: \"kubernetes.io/projected/db230af9-fc19-435e-84f0-1751b4a23f15-kube-api-access-ngv89\") pod \"openshift-apiserver-operator-796bbdcf4f-nd447\" (UID: \"db230af9-fc19-435e-84f0-1751b4a23f15\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589827 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86bd2a67-a214-4aa9-afcd-8b93659acc07-etcd-client\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589852 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn7lz\" 
(UniqueName: \"kubernetes.io/projected/49dadef9-e7d3-492d-859f-a97b88a10d02-kube-api-access-qn7lz\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589876 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-client-ca\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589904 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-image-import-ca\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589926 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49dadef9-e7d3-492d-859f-a97b88a10d02-serving-cert\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589946 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea0533b-1941-4998-8989-13f7f962a294-serving-cert\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589962 
4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/49dadef9-e7d3-492d-859f-a97b88a10d02-etcd-client\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589994 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607121 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f51a22df-16fd-4f58-85dd-af4d0fc97752-images\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607173 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bd2a67-a214-4aa9-afcd-8b93659acc07-serving-cert\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607239 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnx2j\" (UniqueName: \"kubernetes.io/projected/86bd2a67-a214-4aa9-afcd-8b93659acc07-kube-api-access-gnx2j\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607265 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49dadef9-e7d3-492d-859f-a97b88a10d02-audit-policies\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607286 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51a22df-16fd-4f58-85dd-af4d0fc97752-config\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607316 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f51a22df-16fd-4f58-85dd-af4d0fc97752-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607353 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86bd2a67-a214-4aa9-afcd-8b93659acc07-node-pullsecrets\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607381 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-audit\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607404 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86bd2a67-a214-4aa9-afcd-8b93659acc07-audit-dir\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607427 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/49dadef9-e7d3-492d-859f-a97b88a10d02-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607454 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-etcd-serving-ca\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607483 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbp4h\" (UniqueName: \"kubernetes.io/projected/f51a22df-16fd-4f58-85dd-af4d0fc97752-kube-api-access-vbp4h\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607510 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dl6r\" (UniqueName: \"kubernetes.io/projected/4ea0533b-1941-4998-8989-13f7f962a294-kube-api-access-8dl6r\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607546 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607574 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49dadef9-e7d3-492d-859f-a97b88a10d02-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.607599 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86bd2a67-a214-4aa9-afcd-8b93659acc07-encryption-config\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.591276 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.591908 4996 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.608466 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wt42s"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.608914 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.609276 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hpmk6"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.609377 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.591978 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.591558 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.610368 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.610805 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.610811 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.589625 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.595546 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.595670 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.595700 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.595758 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.605189 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.605233 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.605307 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.605338 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.605366 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.605458 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.610915 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.610919 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.611035 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.611057 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.611102 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.611114 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.612417 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.613779 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g92n"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.625905 4996 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.626047 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.626404 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.626691 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.626839 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.626970 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.627147 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.631806 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.637558 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.639045 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.640334 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.642153 4996 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.644653 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.645090 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-btfgz"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.658169 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.659436 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k6wjx"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.660363 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.661442 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.662139 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.659603 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.662561 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.662575 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.663279 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4lxnc"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.666995 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5qvz8"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.667116 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v49dx"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.667178 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-584vx"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.667248 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-plkrt"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.667311 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.667377 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xd5f6"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.667440 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.667506 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.664096 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.663912 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.680194 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.680252 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rzg82"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.682693 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.683529 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.684270 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.686844 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.690717 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.696539 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-47v8f"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.696933 4996 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2b9vl"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.697184 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.697474 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.697898 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.697610 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.698190 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.698323 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.699570 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.700264 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.703629 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b4sg2"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.704357 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.704790 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9599z"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.705572 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.706077 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.706519 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.707449 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.707898 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708434 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-dir\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708461 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7dcb17a-35c9-45cd-a696-faa52e68849e-webhook-cert\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708487 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbp4h\" (UniqueName: \"kubernetes.io/projected/f51a22df-16fd-4f58-85dd-af4d0fc97752-kube-api-access-vbp4h\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708503 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7692c9-64ce-41eb-a54c-9217e614a670-config\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708540 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dl6r\" 
(UniqueName: \"kubernetes.io/projected/4ea0533b-1941-4998-8989-13f7f962a294-kube-api-access-8dl6r\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708588 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/84e53da0-501a-4dae-9a16-ef737205f6c3-etcd-ca\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708734 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708787 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-client-ca\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708827 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708854 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a893a3e-768a-44b4-9ed6-86d318210be3-serving-cert\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708880 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708912 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-serving-cert\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.708965 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-config\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709030 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49dadef9-e7d3-492d-859f-a97b88a10d02-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709061 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc06c6a2-cc61-451c-b556-7cda3df46b14-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lrqvg\" (UID: \"dc06c6a2-cc61-451c-b556-7cda3df46b14\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709091 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/21bad419-219a-4957-9239-8b7583268ed1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-whfkp\" (UID: \"21bad419-219a-4957-9239-8b7583268ed1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709172 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86bd2a67-a214-4aa9-afcd-8b93659acc07-encryption-config\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709243 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p979b\" (UID: \"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709273 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-config\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709303 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709329 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709351 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9507b95b-ce08-4e04-b3aa-6bb55279c631-serving-cert\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 
09:03:20.709376 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/84e53da0-501a-4dae-9a16-ef737205f6c3-etcd-client\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709400 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7dcb17a-35c9-45cd-a696-faa52e68849e-apiservice-cert\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709429 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wznwb\" (UniqueName: \"kubernetes.io/projected/9507b95b-ce08-4e04-b3aa-6bb55279c631-kube-api-access-wznwb\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709456 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/161bc0a4-0094-4627-9e43-ad727e5102b7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jrzvn\" (UID: \"161bc0a4-0094-4627-9e43-ad727e5102b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709479 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nctv\" (UniqueName: 
\"kubernetes.io/projected/372b0658-85b2-4b4d-b3ee-c4692ea9f21a-kube-api-access-2nctv\") pod \"openshift-config-operator-7777fb866f-584vx\" (UID: \"372b0658-85b2-4b4d-b3ee-c4692ea9f21a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709503 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p979b\" (UID: \"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709525 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23d763e1-10e0-4477-a48b-69d0f20032fb-signing-key\") pod \"service-ca-9c57cc56f-47v8f\" (UID: \"23d763e1-10e0-4477-a48b-69d0f20032fb\") " pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709551 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db230af9-fc19-435e-84f0-1751b4a23f15-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nd447\" (UID: \"db230af9-fc19-435e-84f0-1751b4a23f15\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709579 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/372b0658-85b2-4b4d-b3ee-c4692ea9f21a-serving-cert\") pod \"openshift-config-operator-7777fb866f-584vx\" (UID: \"372b0658-85b2-4b4d-b3ee-c4692ea9f21a\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709603 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709614 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49dadef9-e7d3-492d-859f-a97b88a10d02-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709639 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc06c6a2-cc61-451c-b556-7cda3df46b14-config\") pod \"kube-controller-manager-operator-78b949d7b-lrqvg\" (UID: \"dc06c6a2-cc61-451c-b556-7cda3df46b14\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709720 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-config\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709746 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-config\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709811 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49dadef9-e7d3-492d-859f-a97b88a10d02-audit-dir\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709831 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-oauth-serving-cert\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709847 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/84e53da0-501a-4dae-9a16-ef737205f6c3-etcd-service-ca\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709868 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx25t\" (UniqueName: \"kubernetes.io/projected/eb817855-81f3-4906-9fc8-6d4d02a8ca98-kube-api-access-qx25t\") pod \"downloads-7954f5f757-4lxnc\" (UID: \"eb817855-81f3-4906-9fc8-6d4d02a8ca98\") " pod="openshift-console/downloads-7954f5f757-4lxnc" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709906 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49dadef9-e7d3-492d-859f-a97b88a10d02-audit-dir\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709914 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.709942 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-trusted-ca-bundle\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710019 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db230af9-fc19-435e-84f0-1751b4a23f15-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nd447\" (UID: \"db230af9-fc19-435e-84f0-1751b4a23f15\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710072 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/372b0658-85b2-4b4d-b3ee-c4692ea9f21a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-584vx\" (UID: \"372b0658-85b2-4b4d-b3ee-c4692ea9f21a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710100 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbft5\" (UniqueName: 
\"kubernetes.io/projected/c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa-kube-api-access-pbft5\") pod \"openshift-controller-manager-operator-756b6f6bc6-p979b\" (UID: \"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710131 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710222 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/49dadef9-e7d3-492d-859f-a97b88a10d02-encryption-config\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710332 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngv89\" (UniqueName: \"kubernetes.io/projected/db230af9-fc19-435e-84f0-1751b4a23f15-kube-api-access-ngv89\") pod \"openshift-apiserver-operator-796bbdcf4f-nd447\" (UID: \"db230af9-fc19-435e-84f0-1751b4a23f15\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710588 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710721 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d7692c9-64ce-41eb-a54c-9217e614a670-serving-cert\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710811 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db230af9-fc19-435e-84f0-1751b4a23f15-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nd447\" (UID: \"db230af9-fc19-435e-84f0-1751b4a23f15\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710901 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d7692c9-64ce-41eb-a54c-9217e614a670-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710936 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f83c2145-81d4-4d58-b3a8-1b5b0b0bf328-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jd5qm\" (UID: \"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.710789 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711198 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86bd2a67-a214-4aa9-afcd-8b93659acc07-etcd-client\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711216 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-config\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711262 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/633d3a0b-3505-49f8-a777-c785ec1d020b-machine-approver-tls\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711373 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a893a3e-768a-44b4-9ed6-86d318210be3-config\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711409 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/dc06c6a2-cc61-451c-b556-7cda3df46b14-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lrqvg\" (UID: \"dc06c6a2-cc61-451c-b556-7cda3df46b14\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711424 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-config\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711463 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn7lz\" (UniqueName: \"kubernetes.io/projected/49dadef9-e7d3-492d-859f-a97b88a10d02-kube-api-access-qn7lz\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711489 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711497 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711642 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-client-ca\") pod 
\"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711689 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49dadef9-e7d3-492d-859f-a97b88a10d02-serving-cert\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711844 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-image-import-ca\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711916 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/633d3a0b-3505-49f8-a777-c785ec1d020b-auth-proxy-config\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711939 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.711980 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23d763e1-10e0-4477-a48b-69d0f20032fb-signing-cabundle\") pod \"service-ca-9c57cc56f-47v8f\" (UID: \"23d763e1-10e0-4477-a48b-69d0f20032fb\") " pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712035 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea0533b-1941-4998-8989-13f7f962a294-serving-cert\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712068 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/49dadef9-e7d3-492d-859f-a97b88a10d02-etcd-client\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712108 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a893a3e-768a-44b4-9ed6-86d318210be3-trusted-ca\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712126 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhjmw\" (UniqueName: \"kubernetes.io/projected/2d5c455e-6954-4ad7-994d-a73049de9b62-kube-api-access-dhjmw\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" 
Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712143 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e7dcb17a-35c9-45cd-a696-faa52e68849e-tmpfs\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712283 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckpzl\" (UniqueName: \"kubernetes.io/projected/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-kube-api-access-ckpzl\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712300 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/633d3a0b-3505-49f8-a777-c785ec1d020b-config\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712320 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712390 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-client-ca\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712508 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f51a22df-16fd-4f58-85dd-af4d0fc97752-images\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712633 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj6mj\" (UniqueName: \"kubernetes.io/projected/9d7692c9-64ce-41eb-a54c-9217e614a670-kube-api-access-fj6mj\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712756 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e53da0-501a-4dae-9a16-ef737205f6c3-config\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712876 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-policies\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712934 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/86bd2a67-a214-4aa9-afcd-8b93659acc07-serving-cert\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712963 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d7692c9-64ce-41eb-a54c-9217e614a670-service-ca-bundle\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.712987 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcfqn\" (UniqueName: \"kubernetes.io/projected/21bad419-219a-4957-9239-8b7583268ed1-kube-api-access-vcfqn\") pod \"package-server-manager-789f6589d5-whfkp\" (UID: \"21bad419-219a-4957-9239-8b7583268ed1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713066 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713092 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f83c2145-81d4-4d58-b3a8-1b5b0b0bf328-config\") pod \"kube-apiserver-operator-766d6c64bb-jd5qm\" (UID: \"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713121 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51a22df-16fd-4f58-85dd-af4d0fc97752-config\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713158 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnx2j\" (UniqueName: \"kubernetes.io/projected/86bd2a67-a214-4aa9-afcd-8b93659acc07-kube-api-access-gnx2j\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713181 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49dadef9-e7d3-492d-859f-a97b88a10d02-audit-policies\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713213 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqcp4\" (UniqueName: \"kubernetes.io/projected/e7dcb17a-35c9-45cd-a696-faa52e68849e-kube-api-access-xqcp4\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713244 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/f51a22df-16fd-4f58-85dd-af4d0fc97752-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713260 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-nf6hw"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713261 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-image-import-ca\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713269 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.715431 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js6nw\" (UniqueName: \"kubernetes.io/projected/161bc0a4-0094-4627-9e43-ad727e5102b7-kube-api-access-js6nw\") pod \"cluster-samples-operator-665b6dd947-jrzvn\" (UID: \"161bc0a4-0094-4627-9e43-ad727e5102b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.715463 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q64rn\" (UniqueName: 
\"kubernetes.io/projected/5a893a3e-768a-44b4-9ed6-86d318210be3-kube-api-access-q64rn\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713460 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713946 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49dadef9-e7d3-492d-859f-a97b88a10d02-audit-policies\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.714577 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/86bd2a67-a214-4aa9-afcd-8b93659acc07-encryption-config\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.715195 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/49dadef9-e7d3-492d-859f-a97b88a10d02-encryption-config\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.714136 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f51a22df-16fd-4f58-85dd-af4d0fc97752-config\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.715527 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86bd2a67-a214-4aa9-afcd-8b93659acc07-node-pullsecrets\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.715646 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-oauth-config\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.713399 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.715786 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-service-ca\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.715820 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2bw\" (UniqueName: \"kubernetes.io/projected/633d3a0b-3505-49f8-a777-c785ec1d020b-kube-api-access-vg2bw\") pod 
\"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.715990 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.716081 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-audit\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.716122 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/86bd2a67-a214-4aa9-afcd-8b93659acc07-serving-cert\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.716149 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/86bd2a67-a214-4aa9-afcd-8b93659acc07-audit-dir\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.716266 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/86bd2a67-a214-4aa9-afcd-8b93659acc07-audit-dir\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.716298 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/49dadef9-e7d3-492d-859f-a97b88a10d02-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.716412 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.716864 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/49dadef9-e7d3-492d-859f-a97b88a10d02-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.716916 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-audit\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.717789 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f51a22df-16fd-4f58-85dd-af4d0fc97752-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.717835 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f51a22df-16fd-4f58-85dd-af4d0fc97752-images\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.717844 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llhq5\" (UniqueName: \"kubernetes.io/projected/23d763e1-10e0-4477-a48b-69d0f20032fb-kube-api-access-llhq5\") pod \"service-ca-9c57cc56f-47v8f\" (UID: \"23d763e1-10e0-4477-a48b-69d0f20032fb\") " pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.717883 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f83c2145-81d4-4d58-b3a8-1b5b0b0bf328-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jd5qm\" (UID: \"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.717921 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-etcd-serving-ca\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.717947 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/84e53da0-501a-4dae-9a16-ef737205f6c3-serving-cert\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.717978 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqnj6\" (UniqueName: \"kubernetes.io/projected/84e53da0-501a-4dae-9a16-ef737205f6c3-kube-api-access-bqnj6\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.718102 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86bd2a67-a214-4aa9-afcd-8b93659acc07-node-pullsecrets\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.718525 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/86bd2a67-a214-4aa9-afcd-8b93659acc07-etcd-serving-ca\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.719421 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49dadef9-e7d3-492d-859f-a97b88a10d02-serving-cert\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.719918 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4ea0533b-1941-4998-8989-13f7f962a294-serving-cert\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.720829 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/49dadef9-e7d3-492d-859f-a97b88a10d02-etcd-client\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.721713 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.722441 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.723838 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.724135 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.724377 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db230af9-fc19-435e-84f0-1751b4a23f15-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nd447\" (UID: \"db230af9-fc19-435e-84f0-1751b4a23f15\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.724836 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-smv9n"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.724999 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.725887 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.726271 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.726837 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cqrkh"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.726863 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.726912 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k6wjx"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.726927 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.726998 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.727223 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.729790 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2hzpp"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.731361 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.733369 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/86bd2a67-a214-4aa9-afcd-8b93659acc07-etcd-client\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.738865 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-f66hg"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.741135 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-47v8f"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.741172 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2b9vl"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.741289 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f66hg" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.743776 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.746111 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b4sg2"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.748598 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.748728 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.750352 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.751395 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9599z"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.752678 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.753971 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.755308 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-smv9n"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.755786 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: 
I0228 09:03:20.756573 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.757819 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wt42s"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.758956 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.760082 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.761182 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.762521 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f66hg"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.764278 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2hzpp"] Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.775493 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.795477 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.816526 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818543 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818574 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llhq5\" (UniqueName: \"kubernetes.io/projected/23d763e1-10e0-4477-a48b-69d0f20032fb-kube-api-access-llhq5\") pod \"service-ca-9c57cc56f-47v8f\" (UID: \"23d763e1-10e0-4477-a48b-69d0f20032fb\") " pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818596 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f83c2145-81d4-4d58-b3a8-1b5b0b0bf328-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jd5qm\" (UID: \"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818617 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e53da0-501a-4dae-9a16-ef737205f6c3-serving-cert\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818636 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqnj6\" (UniqueName: \"kubernetes.io/projected/84e53da0-501a-4dae-9a16-ef737205f6c3-kube-api-access-bqnj6\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc 
kubenswrapper[4996]: I0228 09:03:20.818657 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-dir\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818680 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7dcb17a-35c9-45cd-a696-faa52e68849e-webhook-cert\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818711 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d7692c9-64ce-41eb-a54c-9217e614a670-config\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818731 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/84e53da0-501a-4dae-9a16-ef737205f6c3-etcd-ca\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818752 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818776 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-client-ca\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818787 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-dir\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818815 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a893a3e-768a-44b4-9ed6-86d318210be3-serving-cert\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818876 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818912 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-config\") pod 
\"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818941 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-serving-cert\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818965 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc06c6a2-cc61-451c-b556-7cda3df46b14-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lrqvg\" (UID: \"dc06c6a2-cc61-451c-b556-7cda3df46b14\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.818988 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/21bad419-219a-4957-9239-8b7583268ed1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-whfkp\" (UID: \"21bad419-219a-4957-9239-8b7583268ed1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.819640 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/84e53da0-501a-4dae-9a16-ef737205f6c3-etcd-ca\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.819646 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d7692c9-64ce-41eb-a54c-9217e614a670-config\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.819916 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-client-ca\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.819967 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p979b\" (UID: \"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.819985 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-config\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820029 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9507b95b-ce08-4e04-b3aa-6bb55279c631-serving-cert\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 
09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820045 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-config\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820054 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820131 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820156 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/84e53da0-501a-4dae-9a16-ef737205f6c3-etcd-client\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820175 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7dcb17a-35c9-45cd-a696-faa52e68849e-apiservice-cert\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: 
\"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820202 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nctv\" (UniqueName: \"kubernetes.io/projected/372b0658-85b2-4b4d-b3ee-c4692ea9f21a-kube-api-access-2nctv\") pod \"openshift-config-operator-7777fb866f-584vx\" (UID: \"372b0658-85b2-4b4d-b3ee-c4692ea9f21a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820225 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p979b\" (UID: \"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820246 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/23d763e1-10e0-4477-a48b-69d0f20032fb-signing-key\") pod \"service-ca-9c57cc56f-47v8f\" (UID: \"23d763e1-10e0-4477-a48b-69d0f20032fb\") " pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820271 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wznwb\" (UniqueName: \"kubernetes.io/projected/9507b95b-ce08-4e04-b3aa-6bb55279c631-kube-api-access-wznwb\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820307 4996 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/161bc0a4-0094-4627-9e43-ad727e5102b7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jrzvn\" (UID: \"161bc0a4-0094-4627-9e43-ad727e5102b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820328 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc06c6a2-cc61-451c-b556-7cda3df46b14-config\") pod \"kube-controller-manager-operator-78b949d7b-lrqvg\" (UID: \"dc06c6a2-cc61-451c-b556-7cda3df46b14\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820347 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/372b0658-85b2-4b4d-b3ee-c4692ea9f21a-serving-cert\") pod \"openshift-config-operator-7777fb866f-584vx\" (UID: \"372b0658-85b2-4b4d-b3ee-c4692ea9f21a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820363 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820380 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/84e53da0-501a-4dae-9a16-ef737205f6c3-etcd-service-ca\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820403 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx25t\" (UniqueName: \"kubernetes.io/projected/eb817855-81f3-4906-9fc8-6d4d02a8ca98-kube-api-access-qx25t\") pod \"downloads-7954f5f757-4lxnc\" (UID: \"eb817855-81f3-4906-9fc8-6d4d02a8ca98\") " pod="openshift-console/downloads-7954f5f757-4lxnc" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820423 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-oauth-serving-cert\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820443 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/372b0658-85b2-4b4d-b3ee-c4692ea9f21a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-584vx\" (UID: \"372b0658-85b2-4b4d-b3ee-c4692ea9f21a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820459 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-trusted-ca-bundle\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820483 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbft5\" (UniqueName: \"kubernetes.io/projected/c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa-kube-api-access-pbft5\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-p979b\" (UID: \"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820501 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820528 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d7692c9-64ce-41eb-a54c-9217e614a670-serving-cert\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820556 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d7692c9-64ce-41eb-a54c-9217e614a670-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820575 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f83c2145-81d4-4d58-b3a8-1b5b0b0bf328-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jd5qm\" (UID: \"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 
09:03:20.820593 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/633d3a0b-3505-49f8-a777-c785ec1d020b-machine-approver-tls\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820608 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a893a3e-768a-44b4-9ed6-86d318210be3-config\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820624 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc06c6a2-cc61-451c-b556-7cda3df46b14-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lrqvg\" (UID: \"dc06c6a2-cc61-451c-b556-7cda3df46b14\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820652 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820732 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/633d3a0b-3505-49f8-a777-c785ec1d020b-auth-proxy-config\") pod \"machine-approver-56656f9798-lmknx\" 
(UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820756 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820814 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23d763e1-10e0-4477-a48b-69d0f20032fb-signing-cabundle\") pod \"service-ca-9c57cc56f-47v8f\" (UID: \"23d763e1-10e0-4477-a48b-69d0f20032fb\") " pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820838 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a893a3e-768a-44b4-9ed6-86d318210be3-trusted-ca\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820895 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhjmw\" (UniqueName: \"kubernetes.io/projected/2d5c455e-6954-4ad7-994d-a73049de9b62-kube-api-access-dhjmw\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820923 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/e7dcb17a-35c9-45cd-a696-faa52e68849e-tmpfs\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.820997 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckpzl\" (UniqueName: \"kubernetes.io/projected/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-kube-api-access-ckpzl\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821048 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/633d3a0b-3505-49f8-a777-c785ec1d020b-config\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821066 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-policies\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821084 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj6mj\" (UniqueName: \"kubernetes.io/projected/9d7692c9-64ce-41eb-a54c-9217e614a670-kube-api-access-fj6mj\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821146 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e53da0-501a-4dae-9a16-ef737205f6c3-config\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821196 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d7692c9-64ce-41eb-a54c-9217e614a670-service-ca-bundle\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821230 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcfqn\" (UniqueName: \"kubernetes.io/projected/21bad419-219a-4957-9239-8b7583268ed1-kube-api-access-vcfqn\") pod \"package-server-manager-789f6589d5-whfkp\" (UID: \"21bad419-219a-4957-9239-8b7583268ed1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821306 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821332 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f83c2145-81d4-4d58-b3a8-1b5b0b0bf328-config\") pod \"kube-apiserver-operator-766d6c64bb-jd5qm\" (UID: \"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821398 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821443 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqcp4\" (UniqueName: \"kubernetes.io/projected/e7dcb17a-35c9-45cd-a696-faa52e68849e-kube-api-access-xqcp4\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821465 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js6nw\" (UniqueName: \"kubernetes.io/projected/161bc0a4-0094-4627-9e43-ad727e5102b7-kube-api-access-js6nw\") pod \"cluster-samples-operator-665b6dd947-jrzvn\" (UID: \"161bc0a4-0094-4627-9e43-ad727e5102b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821482 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q64rn\" (UniqueName: \"kubernetes.io/projected/5a893a3e-768a-44b4-9ed6-86d318210be3-kube-api-access-q64rn\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821515 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-config\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821524 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-oauth-config\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821543 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-service-ca\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821560 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg2bw\" (UniqueName: \"kubernetes.io/projected/633d3a0b-3505-49f8-a777-c785ec1d020b-kube-api-access-vg2bw\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821617 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/84e53da0-501a-4dae-9a16-ef737205f6c3-etcd-service-ca\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.821799 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p979b\" (UID: \"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.822286 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.822481 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.822521 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/372b0658-85b2-4b4d-b3ee-c4692ea9f21a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-584vx\" (UID: \"372b0658-85b2-4b4d-b3ee-c4692ea9f21a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.822552 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-oauth-serving-cert\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " 
pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.822661 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.823132 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-serving-cert\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.823273 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a893a3e-768a-44b4-9ed6-86d318210be3-config\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.823396 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e53da0-501a-4dae-9a16-ef737205f6c3-config\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.823391 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: 
\"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.823689 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e7dcb17a-35c9-45cd-a696-faa52e68849e-tmpfs\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.823837 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.824114 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-policies\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.824238 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/633d3a0b-3505-49f8-a777-c785ec1d020b-auth-proxy-config\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.824239 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5a893a3e-768a-44b4-9ed6-86d318210be3-trusted-ca\") pod 
\"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.824403 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-trusted-ca-bundle\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.824438 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.824621 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/633d3a0b-3505-49f8-a777-c785ec1d020b-config\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.825070 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d7692c9-64ce-41eb-a54c-9217e614a670-service-ca-bundle\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.825152 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/84e53da0-501a-4dae-9a16-ef737205f6c3-serving-cert\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.825542 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-service-ca\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.825671 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/161bc0a4-0094-4627-9e43-ad727e5102b7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-jrzvn\" (UID: \"161bc0a4-0094-4627-9e43-ad727e5102b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.825914 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d7692c9-64ce-41eb-a54c-9217e614a670-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.826416 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.826685 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9507b95b-ce08-4e04-b3aa-6bb55279c631-serving-cert\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.826862 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/633d3a0b-3505-49f8-a777-c785ec1d020b-machine-approver-tls\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.827076 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/84e53da0-501a-4dae-9a16-ef737205f6c3-etcd-client\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.827438 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a893a3e-768a-44b4-9ed6-86d318210be3-serving-cert\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.827686 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-oauth-config\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:20 crc kubenswrapper[4996]: 
I0228 09:03:20.827880 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.827904 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.828138 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.828617 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.828668 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d7692c9-64ce-41eb-a54c-9217e614a670-serving-cert\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: 
\"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.829036 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p979b\" (UID: \"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.829231 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/372b0658-85b2-4b4d-b3ee-c4692ea9f21a-serving-cert\") pod \"openshift-config-operator-7777fb866f-584vx\" (UID: \"372b0658-85b2-4b4d-b3ee-c4692ea9f21a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.835831 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.855817 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.896190 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.916789 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.936073 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.962222 4996 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.975596 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 28 09:03:20 crc kubenswrapper[4996]: I0228 09:03:20.997190 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.015996 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.038416 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.056301 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.077305 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.097409 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.116064 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.137361 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.157208 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.176525 4996 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.196388 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.216887 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.226661 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f83c2145-81d4-4d58-b3a8-1b5b0b0bf328-config\") pod \"kube-apiserver-operator-766d6c64bb-jd5qm\" (UID: \"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.235714 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.255841 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.276961 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.283596 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc06c6a2-cc61-451c-b556-7cda3df46b14-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lrqvg\" (UID: \"dc06c6a2-cc61-451c-b556-7cda3df46b14\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.298268 4996 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.305866 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f83c2145-81d4-4d58-b3a8-1b5b0b0bf328-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jd5qm\" (UID: \"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.318432 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.336688 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.343447 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc06c6a2-cc61-451c-b556-7cda3df46b14-config\") pod \"kube-controller-manager-operator-78b949d7b-lrqvg\" (UID: \"dc06c6a2-cc61-451c-b556-7cda3df46b14\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.357264 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.377059 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.397044 4996 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.416699 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.435976 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.457973 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.477490 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.497083 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.516499 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.536236 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.556046 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.576420 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.585364 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"signing-key\" (UniqueName: \"kubernetes.io/secret/23d763e1-10e0-4477-a48b-69d0f20032fb-signing-key\") pod \"service-ca-9c57cc56f-47v8f\" (UID: \"23d763e1-10e0-4477-a48b-69d0f20032fb\") " pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.596472 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.616880 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.626143 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/23d763e1-10e0-4477-a48b-69d0f20032fb-signing-cabundle\") pod \"service-ca-9c57cc56f-47v8f\" (UID: \"23d763e1-10e0-4477-a48b-69d0f20032fb\") " pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.636679 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.657869 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.676461 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.697623 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.703676 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e7dcb17a-35c9-45cd-a696-faa52e68849e-webhook-cert\") pod 
\"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.708130 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e7dcb17a-35c9-45cd-a696-faa52e68849e-apiservice-cert\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.714841 4996 request.go:700] Waited for 1.015882367s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.717303 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.737254 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.757361 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.776800 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.806226 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.817038 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.825279 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/21bad419-219a-4957-9239-8b7583268ed1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-whfkp\" (UID: \"21bad419-219a-4957-9239-8b7583268ed1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.857996 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.877583 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.897770 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.916975 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.937594 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.957972 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.978067 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 28 09:03:21 crc kubenswrapper[4996]: I0228 09:03:21.996817 4996 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.017342 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.038067 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.085477 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dl6r\" (UniqueName: \"kubernetes.io/projected/4ea0533b-1941-4998-8989-13f7f962a294-kube-api-access-8dl6r\") pod \"controller-manager-879f6c89f-cqrkh\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.104939 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.106286 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbp4h\" (UniqueName: \"kubernetes.io/projected/f51a22df-16fd-4f58-85dd-af4d0fc97752-kube-api-access-vbp4h\") pod \"machine-api-operator-5694c8668f-xd5f6\" (UID: \"f51a22df-16fd-4f58-85dd-af4d0fc97752\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.117110 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.118277 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngv89\" (UniqueName: \"kubernetes.io/projected/db230af9-fc19-435e-84f0-1751b4a23f15-kube-api-access-ngv89\") pod \"openshift-apiserver-operator-796bbdcf4f-nd447\" (UID: \"db230af9-fc19-435e-84f0-1751b4a23f15\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.137663 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.178053 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn7lz\" (UniqueName: \"kubernetes.io/projected/49dadef9-e7d3-492d-859f-a97b88a10d02-kube-api-access-qn7lz\") pod \"apiserver-7bbb656c7d-xhlhx\" (UID: \"49dadef9-e7d3-492d-859f-a97b88a10d02\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.198178 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 
09:03:22.209565 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnx2j\" (UniqueName: \"kubernetes.io/projected/86bd2a67-a214-4aa9-afcd-8b93659acc07-kube-api-access-gnx2j\") pod \"apiserver-76f77b778f-hpmk6\" (UID: \"86bd2a67-a214-4aa9-afcd-8b93659acc07\") " pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.217629 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.239481 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.259888 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.281319 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.296694 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.303739 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.317346 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.326560 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.338357 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.341187 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.357293 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.360535 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.377408 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.380166 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cqrkh"] Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.396229 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.449843 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.450557 4996 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 28 09:03:22 crc kubenswrapper[4996]: W0228 09:03:22.453731 4996 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ea0533b_1941_4998_8989_13f7f962a294.slice/crio-b36e1e7cb8cec066179ad872d1d7906d3cf9ff8983fe29a24ef2ed1e33ae408e WatchSource:0}: Error finding container b36e1e7cb8cec066179ad872d1d7906d3cf9ff8983fe29a24ef2ed1e33ae408e: Status 404 returned error can't find the container with id b36e1e7cb8cec066179ad872d1d7906d3cf9ff8983fe29a24ef2ed1e33ae408e Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.456483 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.479329 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.497902 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.516798 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.536938 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.559934 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.565588 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hpmk6"] Feb 28 09:03:22 crc kubenswrapper[4996]: W0228 09:03:22.572250 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86bd2a67_a214_4aa9_afcd_8b93659acc07.slice/crio-4dd2d722b39f9bc5b9c04f072c1927e3cc4e9da5637b9b5a55a6c514f724553c 
WatchSource:0}: Error finding container 4dd2d722b39f9bc5b9c04f072c1927e3cc4e9da5637b9b5a55a6c514f724553c: Status 404 returned error can't find the container with id 4dd2d722b39f9bc5b9c04f072c1927e3cc4e9da5637b9b5a55a6c514f724553c Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.576186 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.596914 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.616615 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.631770 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-xd5f6"] Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.636503 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 28 09:03:22 crc kubenswrapper[4996]: W0228 09:03:22.638654 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf51a22df_16fd_4f58_85dd_af4d0fc97752.slice/crio-3cf1fd44101a0ba41f2cfeda698b3669d9c3e074b131270c4336281d24b5c18c WatchSource:0}: Error finding container 3cf1fd44101a0ba41f2cfeda698b3669d9c3e074b131270c4336281d24b5c18c: Status 404 returned error can't find the container with id 3cf1fd44101a0ba41f2cfeda698b3669d9c3e074b131270c4336281d24b5c18c Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.675828 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f83c2145-81d4-4d58-b3a8-1b5b0b0bf328-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jd5qm\" (UID: 
\"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.681644 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" event={"ID":"f51a22df-16fd-4f58-85dd-af4d0fc97752","Type":"ContainerStarted","Data":"3cf1fd44101a0ba41f2cfeda698b3669d9c3e074b131270c4336281d24b5c18c"} Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.685572 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" event={"ID":"4ea0533b-1941-4998-8989-13f7f962a294","Type":"ContainerStarted","Data":"bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718"} Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.685613 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" event={"ID":"4ea0533b-1941-4998-8989-13f7f962a294","Type":"ContainerStarted","Data":"b36e1e7cb8cec066179ad872d1d7906d3cf9ff8983fe29a24ef2ed1e33ae408e"} Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.686695 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.690416 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" event={"ID":"86bd2a67-a214-4aa9-afcd-8b93659acc07","Type":"ContainerStarted","Data":"4dd2d722b39f9bc5b9c04f072c1927e3cc4e9da5637b9b5a55a6c514f724553c"} Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.691434 4996 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-cqrkh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: 
connection refused" start-of-body= Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.691481 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" podUID="4ea0533b-1941-4998-8989-13f7f962a294" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.692454 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llhq5\" (UniqueName: \"kubernetes.io/projected/23d763e1-10e0-4477-a48b-69d0f20032fb-kube-api-access-llhq5\") pod \"service-ca-9c57cc56f-47v8f\" (UID: \"23d763e1-10e0-4477-a48b-69d0f20032fb\") " pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.702686 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.712241 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqnj6\" (UniqueName: \"kubernetes.io/projected/84e53da0-501a-4dae-9a16-ef737205f6c3-kube-api-access-bqnj6\") pod \"etcd-operator-b45778765-rzg82\" (UID: \"84e53da0-501a-4dae-9a16-ef737205f6c3\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.733936 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wznwb\" (UniqueName: \"kubernetes.io/projected/9507b95b-ce08-4e04-b3aa-6bb55279c631-kube-api-access-wznwb\") pod \"route-controller-manager-6576b87f9c-w6cs4\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.735182 4996 request.go:700] Waited for 1.914250283s due 
to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.755623 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nctv\" (UniqueName: \"kubernetes.io/projected/372b0658-85b2-4b4d-b3ee-c4692ea9f21a-kube-api-access-2nctv\") pod \"openshift-config-operator-7777fb866f-584vx\" (UID: \"372b0658-85b2-4b4d-b3ee-c4692ea9f21a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.776247 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx25t\" (UniqueName: \"kubernetes.io/projected/eb817855-81f3-4906-9fc8-6d4d02a8ca98-kube-api-access-qx25t\") pod \"downloads-7954f5f757-4lxnc\" (UID: \"eb817855-81f3-4906-9fc8-6d4d02a8ca98\") " pod="openshift-console/downloads-7954f5f757-4lxnc" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.793137 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg2bw\" (UniqueName: \"kubernetes.io/projected/633d3a0b-3505-49f8-a777-c785ec1d020b-kube-api-access-vg2bw\") pod \"machine-approver-56656f9798-lmknx\" (UID: \"633d3a0b-3505-49f8-a777-c785ec1d020b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.806674 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.814288 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx"] Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.815623 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447"] Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.818760 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc06c6a2-cc61-451c-b556-7cda3df46b14-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lrqvg\" (UID: \"dc06c6a2-cc61-451c-b556-7cda3df46b14\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.829132 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj6mj\" (UniqueName: \"kubernetes.io/projected/9d7692c9-64ce-41eb-a54c-9217e614a670-kube-api-access-fj6mj\") pod \"authentication-operator-69f744f599-5qvz8\" (UID: \"9d7692c9-64ce-41eb-a54c-9217e614a670\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:22 crc kubenswrapper[4996]: W0228 09:03:22.829133 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb230af9_fc19_435e_84f0_1751b4a23f15.slice/crio-37a5f495696699461c5e1bda7c4daa41bfc3bfddac7c85076e384f690146a312 WatchSource:0}: Error finding container 37a5f495696699461c5e1bda7c4daa41bfc3bfddac7c85076e384f690146a312: Status 404 returned error can't find the container with id 37a5f495696699461c5e1bda7c4daa41bfc3bfddac7c85076e384f690146a312 Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.863157 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhjmw\" (UniqueName: \"kubernetes.io/projected/2d5c455e-6954-4ad7-994d-a73049de9b62-kube-api-access-dhjmw\") pod \"oauth-openshift-558db77b4-4g92n\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.864990 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.874369 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.880320 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckpzl\" (UniqueName: \"kubernetes.io/projected/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-kube-api-access-ckpzl\") pod \"console-f9d7485db-v49dx\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.898471 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js6nw\" (UniqueName: \"kubernetes.io/projected/161bc0a4-0094-4627-9e43-ad727e5102b7-kube-api-access-js6nw\") pod \"cluster-samples-operator-665b6dd947-jrzvn\" (UID: \"161bc0a4-0094-4627-9e43-ad727e5102b7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.901257 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-47v8f"] Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.920754 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbft5\" (UniqueName: 
\"kubernetes.io/projected/c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa-kube-api-access-pbft5\") pod \"openshift-controller-manager-operator-756b6f6bc6-p979b\" (UID: \"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.929330 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.934543 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcfqn\" (UniqueName: \"kubernetes.io/projected/21bad419-219a-4957-9239-8b7583268ed1-kube-api-access-vcfqn\") pod \"package-server-manager-789f6589d5-whfkp\" (UID: \"21bad419-219a-4957-9239-8b7583268ed1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.950976 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqcp4\" (UniqueName: \"kubernetes.io/projected/e7dcb17a-35c9-45cd-a696-faa52e68849e-kube-api-access-xqcp4\") pod \"packageserver-d55dfcdfc-6cwnw\" (UID: \"e7dcb17a-35c9-45cd-a696-faa52e68849e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.966734 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.972469 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q64rn\" (UniqueName: \"kubernetes.io/projected/5a893a3e-768a-44b4-9ed6-86d318210be3-kube-api-access-q64rn\") pod \"console-operator-58897d9998-plkrt\" (UID: \"5a893a3e-768a-44b4-9ed6-86d318210be3\") " pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:22 crc kubenswrapper[4996]: I0228 09:03:22.992339 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.009216 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-584vx"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.011545 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.025655 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.026806 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.055436 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-4lxnc" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.066740 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a17bc456-8bc4-464f-a3d4-3d9ac9985870-installation-pull-secrets\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.066800 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-tls\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.067808 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq2hx\" (UniqueName: \"kubernetes.io/projected/79dfa3c4-7b0b-4e92-ad1a-99daa139c082-kube-api-access-bq2hx\") pod \"dns-operator-744455d44c-wt42s\" (UID: \"79dfa3c4-7b0b-4e92-ad1a-99daa139c082\") " pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.079271 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c75c10ff-a3b2-4617-b0cd-462f598ecc90-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7dstk\" (UID: \"c75c10ff-a3b2-4617-b0cd-462f598ecc90\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.079354 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-d6xtp\" (UniqueName: \"kubernetes.io/projected/e9f04f37-5418-49cc-9788-ce468f52375d-kube-api-access-d6xtp\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7xr9\" (UID: \"e9f04f37-5418-49cc-9788-ce468f52375d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.079387 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f04f37-5418-49cc-9788-ce468f52375d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7xr9\" (UID: \"e9f04f37-5418-49cc-9788-ce468f52375d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.079450 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2b9vl\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.079489 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10f63a0f-b508-4232-ab29-f6e480875b78-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.079520 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/38a7129e-8ecc-45f5-a199-b01a3d03b961-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.080042 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/209c9ded-078e-4147-8f4f-652dcc9be452-metrics-certs\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.080132 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38a7129e-8ecc-45f5-a199-b01a3d03b961-trusted-ca\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.080250 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/209c9ded-078e-4147-8f4f-652dcc9be452-service-ca-bundle\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.081414 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/10f63a0f-b508-4232-ab29-f6e480875b78-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 
28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.081459 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c75c10ff-a3b2-4617-b0cd-462f598ecc90-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7dstk\" (UID: \"c75c10ff-a3b2-4617-b0cd-462f598ecc90\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.081528 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a17bc456-8bc4-464f-a3d4-3d9ac9985870-ca-trust-extracted\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.083970 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79dfa3c4-7b0b-4e92-ad1a-99daa139c082-metrics-tls\") pod \"dns-operator-744455d44c-wt42s\" (UID: \"79dfa3c4-7b0b-4e92-ad1a-99daa139c082\") " pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.085739 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-certificates\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086218 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2b9vl\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086283 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/209c9ded-078e-4147-8f4f-652dcc9be452-default-certificate\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086349 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086384 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s8rt\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-kube-api-access-7s8rt\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086450 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-trusted-ca\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" 
Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086481 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10f63a0f-b508-4232-ab29-f6e480875b78-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086560 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlh7x\" (UniqueName: \"kubernetes.io/projected/38a7129e-8ecc-45f5-a199-b01a3d03b961-kube-api-access-vlh7x\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086616 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-bound-sa-token\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086643 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9f04f37-5418-49cc-9788-ce468f52375d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7xr9\" (UID: \"e9f04f37-5418-49cc-9788-ce468f52375d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086695 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/209c9ded-078e-4147-8f4f-652dcc9be452-stats-auth\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086739 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pm7h\" (UniqueName: \"kubernetes.io/projected/8e1b2a41-1776-4907-b520-c7c941c17a54-kube-api-access-2pm7h\") pod \"marketplace-operator-79b997595-2b9vl\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086792 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c75c10ff-a3b2-4617-b0cd-462f598ecc90-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7dstk\" (UID: \"c75c10ff-a3b2-4617-b0cd-462f598ecc90\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086829 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj8vb\" (UniqueName: \"kubernetes.io/projected/10f63a0f-b508-4232-ab29-f6e480875b78-kube-api-access-wj8vb\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086855 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/38a7129e-8ecc-45f5-a199-b01a3d03b961-metrics-tls\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: 
\"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.086904 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bp7p\" (UniqueName: \"kubernetes.io/projected/209c9ded-078e-4147-8f4f-652dcc9be452-kube-api-access-2bp7p\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.094412 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:23.59439603 +0000 UTC m=+167.285198841 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.095507 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.099826 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.103827 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.117316 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.147124 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.157526 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.162706 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rzg82"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.187681 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.187907 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:23.68786011 +0000 UTC m=+167.378662921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.199392 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-tls\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.199426 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qfbf\" (UniqueName: \"kubernetes.io/projected/39aa72fe-d020-4e33-81bd-3b14cf9da392-kube-api-access-2qfbf\") pod \"migrator-59844c95c7-qlk7v\" (UID: \"39aa72fe-d020-4e33-81bd-3b14cf9da392\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.199445 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/077ab59f-d160-4842-ad20-f982d5447f5b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.201503 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq2hx\" (UniqueName: 
\"kubernetes.io/projected/79dfa3c4-7b0b-4e92-ad1a-99daa139c082-kube-api-access-bq2hx\") pod \"dns-operator-744455d44c-wt42s\" (UID: \"79dfa3c4-7b0b-4e92-ad1a-99daa139c082\") " pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.201527 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1939f298-410c-4eff-a94b-8f0f95fb5093-config\") pod \"service-ca-operator-777779d784-rrcgm\" (UID: \"1939f298-410c-4eff-a94b-8f0f95fb5093\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.202488 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c75c10ff-a3b2-4617-b0cd-462f598ecc90-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7dstk\" (UID: \"c75c10ff-a3b2-4617-b0cd-462f598ecc90\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.202513 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6xtp\" (UniqueName: \"kubernetes.io/projected/e9f04f37-5418-49cc-9788-ce468f52375d-kube-api-access-d6xtp\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7xr9\" (UID: \"e9f04f37-5418-49cc-9788-ce468f52375d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.202538 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbkh6\" (UniqueName: \"kubernetes.io/projected/cca6cc42-3078-4920-9cea-9a69d9e03588-kube-api-access-cbkh6\") pod \"machine-config-server-nf6hw\" (UID: \"cca6cc42-3078-4920-9cea-9a69d9e03588\") " 
pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.202584 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctrkl\" (UniqueName: \"kubernetes.io/projected/8047f7f7-2e8a-4c76-b69b-ba6919c8ec48-kube-api-access-ctrkl\") pod \"dns-default-2hzpp\" (UID: \"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48\") " pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.202602 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f04f37-5418-49cc-9788-ce468f52375d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7xr9\" (UID: \"e9f04f37-5418-49cc-9788-ce468f52375d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.203185 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c75c10ff-a3b2-4617-b0cd-462f598ecc90-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7dstk\" (UID: \"c75c10ff-a3b2-4617-b0cd-462f598ecc90\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204027 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2b9vl\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204068 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/10f63a0f-b508-4232-ab29-f6e480875b78-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204098 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38a7129e-8ecc-45f5-a199-b01a3d03b961-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204116 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/209c9ded-078e-4147-8f4f-652dcc9be452-metrics-certs\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204144 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38a7129e-8ecc-45f5-a199-b01a3d03b961-trusted-ca\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204173 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d26cbe03-d214-423e-a80c-ef75c798c3c1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c6v4k\" (UID: \"d26cbe03-d214-423e-a80c-ef75c798c3c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 
09:03:23.204221 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-csi-data-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204242 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/209c9ded-078e-4147-8f4f-652dcc9be452-service-ca-bundle\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204287 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/077ab59f-d160-4842-ad20-f982d5447f5b-images\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204306 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/077ab59f-d160-4842-ad20-f982d5447f5b-proxy-tls\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204337 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-2b9vl\" (UID: 
\"8e1b2a41-1776-4907-b520-c7c941c17a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204350 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/10f63a0f-b508-4232-ab29-f6e480875b78-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204368 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c75c10ff-a3b2-4617-b0cd-462f598ecc90-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7dstk\" (UID: \"c75c10ff-a3b2-4617-b0cd-462f598ecc90\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204388 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/626c201e-8cfd-454f-b354-e74eebd622f6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-thpx5\" (UID: \"626c201e-8cfd-454f-b354-e74eebd622f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204404 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cca6cc42-3078-4920-9cea-9a69d9e03588-node-bootstrap-token\") pod \"machine-config-server-nf6hw\" (UID: \"cca6cc42-3078-4920-9cea-9a69d9e03588\") " pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204424 
4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a17bc456-8bc4-464f-a3d4-3d9ac9985870-ca-trust-extracted\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204457 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-plugins-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204505 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79dfa3c4-7b0b-4e92-ad1a-99daa139c082-metrics-tls\") pod \"dns-operator-744455d44c-wt42s\" (UID: \"79dfa3c4-7b0b-4e92-ad1a-99daa139c082\") " pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204520 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcafa6f4-ba55-465f-928c-71b3687abd21-config-volume\") pod \"collect-profiles-29537820-htwpm\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204550 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cca6cc42-3078-4920-9cea-9a69d9e03588-certs\") pod \"machine-config-server-nf6hw\" (UID: \"cca6cc42-3078-4920-9cea-9a69d9e03588\") " 
pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204591 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-registration-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204630 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-certificates\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204646 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8047f7f7-2e8a-4c76-b69b-ba6919c8ec48-metrics-tls\") pod \"dns-default-2hzpp\" (UID: \"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48\") " pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204662 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/510981d7-43ee-415f-aada-96bdb8eef50e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b4sg2\" (UID: \"510981d7-43ee-415f-aada-96bdb8eef50e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204677 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjwzq\" (UniqueName: 
\"kubernetes.io/projected/6882c5d6-656a-429c-a42e-0bb482e00220-kube-api-access-jjwzq\") pod \"catalog-operator-68c6474976-nzcpl\" (UID: \"6882c5d6-656a-429c-a42e-0bb482e00220\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204705 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6dm4\" (UniqueName: \"kubernetes.io/projected/d26cbe03-d214-423e-a80c-ef75c798c3c1-kube-api-access-m6dm4\") pod \"olm-operator-6b444d44fb-c6v4k\" (UID: \"d26cbe03-d214-423e-a80c-ef75c798c3c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204720 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f-cert\") pod \"ingress-canary-f66hg\" (UID: \"4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f\") " pod="openshift-ingress-canary/ingress-canary-f66hg" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204734 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8047f7f7-2e8a-4c76-b69b-ba6919c8ec48-config-volume\") pod \"dns-default-2hzpp\" (UID: \"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48\") " pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204751 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85qjq\" (UniqueName: \"kubernetes.io/projected/4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f-kube-api-access-85qjq\") pod \"ingress-canary-f66hg\" (UID: \"4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f\") " pod="openshift-ingress-canary/ingress-canary-f66hg" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204766 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckn2b\" (UniqueName: \"kubernetes.io/projected/510981d7-43ee-415f-aada-96bdb8eef50e-kube-api-access-ckn2b\") pod \"multus-admission-controller-857f4d67dd-b4sg2\" (UID: \"510981d7-43ee-415f-aada-96bdb8eef50e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204805 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-socket-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204848 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2b9vl\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204864 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/209c9ded-078e-4147-8f4f-652dcc9be452-default-certificate\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204900 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: 
\"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.204924 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s8rt\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-kube-api-access-7s8rt\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.205993 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-trusted-ca\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.206046 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10f63a0f-b508-4232-ab29-f6e480875b78-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.206215 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlh7x\" (UniqueName: \"kubernetes.io/projected/38a7129e-8ecc-45f5-a199-b01a3d03b961-kube-api-access-vlh7x\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.206245 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j76xw\" (UniqueName: 
\"kubernetes.io/projected/1939f298-410c-4eff-a94b-8f0f95fb5093-kube-api-access-j76xw\") pod \"service-ca-operator-777779d784-rrcgm\" (UID: \"1939f298-410c-4eff-a94b-8f0f95fb5093\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.206288 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-bound-sa-token\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.206306 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9f04f37-5418-49cc-9788-ce468f52375d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7xr9\" (UID: \"e9f04f37-5418-49cc-9788-ce468f52375d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.206911 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:23.706895442 +0000 UTC m=+167.397698253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.207535 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-certificates\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.208540 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9f04f37-5418-49cc-9788-ce468f52375d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7xr9\" (UID: \"e9f04f37-5418-49cc-9788-ce468f52375d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.208809 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/10f63a0f-b508-4232-ab29-f6e480875b78-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.210101 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm"] Feb 28 09:03:23 crc kubenswrapper[4996]: 
I0228 09:03:23.212723 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-2b9vl\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.214630 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcafa6f4-ba55-465f-928c-71b3687abd21-secret-volume\") pod \"collect-profiles-29537820-htwpm\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.214678 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zfr6\" (UniqueName: \"kubernetes.io/projected/3f55132b-9e49-49fb-9043-aa56c455ea0f-kube-api-access-5zfr6\") pod \"control-plane-machine-set-operator-78cbb6b69f-bhdsm\" (UID: \"3f55132b-9e49-49fb-9043-aa56c455ea0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.214720 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc978\" (UniqueName: \"kubernetes.io/projected/077ab59f-d160-4842-ad20-f982d5447f5b-kube-api-access-vc978\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.214749 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/209c9ded-078e-4147-8f4f-652dcc9be452-stats-auth\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.214881 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-tls\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.215269 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6882c5d6-656a-429c-a42e-0bb482e00220-profile-collector-cert\") pod \"catalog-operator-68c6474976-nzcpl\" (UID: \"6882c5d6-656a-429c-a42e-0bb482e00220\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.215318 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-mountpoint-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.215355 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pm7h\" (UniqueName: \"kubernetes.io/projected/8e1b2a41-1776-4907-b520-c7c941c17a54-kube-api-access-2pm7h\") pod \"marketplace-operator-79b997595-2b9vl\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.215375 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f55132b-9e49-49fb-9043-aa56c455ea0f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bhdsm\" (UID: \"3f55132b-9e49-49fb-9043-aa56c455ea0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.215446 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c75c10ff-a3b2-4617-b0cd-462f598ecc90-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7dstk\" (UID: \"c75c10ff-a3b2-4617-b0cd-462f598ecc90\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.215470 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj8vb\" (UniqueName: \"kubernetes.io/projected/10f63a0f-b508-4232-ab29-f6e480875b78-kube-api-access-wj8vb\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.215490 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/38a7129e-8ecc-45f5-a199-b01a3d03b961-metrics-tls\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.215515 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/d26cbe03-d214-423e-a80c-ef75c798c3c1-srv-cert\") pod \"olm-operator-6b444d44fb-c6v4k\" (UID: \"d26cbe03-d214-423e-a80c-ef75c798c3c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.216417 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/626c201e-8cfd-454f-b354-e74eebd622f6-proxy-tls\") pod \"machine-config-controller-84d6567774-thpx5\" (UID: \"626c201e-8cfd-454f-b354-e74eebd622f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.216447 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbrnw\" (UniqueName: \"kubernetes.io/projected/626c201e-8cfd-454f-b354-e74eebd622f6-kube-api-access-mbrnw\") pod \"machine-config-controller-84d6567774-thpx5\" (UID: \"626c201e-8cfd-454f-b354-e74eebd622f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.216472 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjllk\" (UniqueName: \"kubernetes.io/projected/ffb84f35-51cc-4424-afb6-4baed4de2542-kube-api-access-qjllk\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.216502 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bp7p\" (UniqueName: \"kubernetes.io/projected/209c9ded-078e-4147-8f4f-652dcc9be452-kube-api-access-2bp7p\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 
28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.216528 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ktq\" (UniqueName: \"kubernetes.io/projected/dcafa6f4-ba55-465f-928c-71b3687abd21-kube-api-access-66ktq\") pod \"collect-profiles-29537820-htwpm\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.216559 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6882c5d6-656a-429c-a42e-0bb482e00220-srv-cert\") pod \"catalog-operator-68c6474976-nzcpl\" (UID: \"6882c5d6-656a-429c-a42e-0bb482e00220\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.216584 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1939f298-410c-4eff-a94b-8f0f95fb5093-serving-cert\") pod \"service-ca-operator-777779d784-rrcgm\" (UID: \"1939f298-410c-4eff-a94b-8f0f95fb5093\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.216789 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a17bc456-8bc4-464f-a3d4-3d9ac9985870-installation-pull-secrets\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.217715 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38a7129e-8ecc-45f5-a199-b01a3d03b961-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.219916 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/209c9ded-078e-4147-8f4f-652dcc9be452-metrics-certs\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.224017 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a17bc456-8bc4-464f-a3d4-3d9ac9985870-ca-trust-extracted\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.233373 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-trusted-ca\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.233988 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/209c9ded-078e-4147-8f4f-652dcc9be452-default-certificate\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.234181 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/209c9ded-078e-4147-8f4f-652dcc9be452-service-ca-bundle\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.234953 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/209c9ded-078e-4147-8f4f-652dcc9be452-stats-auth\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.244368 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e9f04f37-5418-49cc-9788-ce468f52375d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7xr9\" (UID: \"e9f04f37-5418-49cc-9788-ce468f52375d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.244753 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/10f63a0f-b508-4232-ab29-f6e480875b78-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.244770 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/79dfa3c4-7b0b-4e92-ad1a-99daa139c082-metrics-tls\") pod \"dns-operator-744455d44c-wt42s\" (UID: \"79dfa3c4-7b0b-4e92-ad1a-99daa139c082\") " pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.245108 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/38a7129e-8ecc-45f5-a199-b01a3d03b961-metrics-tls\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.247762 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a17bc456-8bc4-464f-a3d4-3d9ac9985870-installation-pull-secrets\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.248953 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c75c10ff-a3b2-4617-b0cd-462f598ecc90-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7dstk\" (UID: \"c75c10ff-a3b2-4617-b0cd-462f598ecc90\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.252963 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq2hx\" (UniqueName: \"kubernetes.io/projected/79dfa3c4-7b0b-4e92-ad1a-99daa139c082-kube-api-access-bq2hx\") pod \"dns-operator-744455d44c-wt42s\" (UID: \"79dfa3c4-7b0b-4e92-ad1a-99daa139c082\") " pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.280998 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.289935 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6xtp\" (UniqueName: 
\"kubernetes.io/projected/e9f04f37-5418-49cc-9788-ce468f52375d-kube-api-access-d6xtp\") pod \"kube-storage-version-migrator-operator-b67b599dd-x7xr9\" (UID: \"e9f04f37-5418-49cc-9788-ce468f52375d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.292792 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.293631 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10f63a0f-b508-4232-ab29-f6e480875b78-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.318309 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.318522 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j76xw\" (UniqueName: \"kubernetes.io/projected/1939f298-410c-4eff-a94b-8f0f95fb5093-kube-api-access-j76xw\") pod \"service-ca-operator-777779d784-rrcgm\" (UID: \"1939f298-410c-4eff-a94b-8f0f95fb5093\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.318554 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s8rt\" 
(UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-kube-api-access-7s8rt\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.318603 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:23.818574044 +0000 UTC m=+167.509376855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.318674 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcafa6f4-ba55-465f-928c-71b3687abd21-secret-volume\") pod \"collect-profiles-29537820-htwpm\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.318707 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zfr6\" (UniqueName: \"kubernetes.io/projected/3f55132b-9e49-49fb-9043-aa56c455ea0f-kube-api-access-5zfr6\") pod \"control-plane-machine-set-operator-78cbb6b69f-bhdsm\" (UID: \"3f55132b-9e49-49fb-9043-aa56c455ea0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" Feb 28 09:03:23 crc 
kubenswrapper[4996]: I0228 09:03:23.318735 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc978\" (UniqueName: \"kubernetes.io/projected/077ab59f-d160-4842-ad20-f982d5447f5b-kube-api-access-vc978\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.318764 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6882c5d6-656a-429c-a42e-0bb482e00220-profile-collector-cert\") pod \"catalog-operator-68c6474976-nzcpl\" (UID: \"6882c5d6-656a-429c-a42e-0bb482e00220\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.318785 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-mountpoint-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.318814 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f55132b-9e49-49fb-9043-aa56c455ea0f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bhdsm\" (UID: \"3f55132b-9e49-49fb-9043-aa56c455ea0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.318846 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d26cbe03-d214-423e-a80c-ef75c798c3c1-srv-cert\") pod 
\"olm-operator-6b444d44fb-c6v4k\" (UID: \"d26cbe03-d214-423e-a80c-ef75c798c3c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319031 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjllk\" (UniqueName: \"kubernetes.io/projected/ffb84f35-51cc-4424-afb6-4baed4de2542-kube-api-access-qjllk\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319067 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/626c201e-8cfd-454f-b354-e74eebd622f6-proxy-tls\") pod \"machine-config-controller-84d6567774-thpx5\" (UID: \"626c201e-8cfd-454f-b354-e74eebd622f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319085 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbrnw\" (UniqueName: \"kubernetes.io/projected/626c201e-8cfd-454f-b354-e74eebd622f6-kube-api-access-mbrnw\") pod \"machine-config-controller-84d6567774-thpx5\" (UID: \"626c201e-8cfd-454f-b354-e74eebd622f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319126 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66ktq\" (UniqueName: \"kubernetes.io/projected/dcafa6f4-ba55-465f-928c-71b3687abd21-kube-api-access-66ktq\") pod \"collect-profiles-29537820-htwpm\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319147 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6882c5d6-656a-429c-a42e-0bb482e00220-srv-cert\") pod \"catalog-operator-68c6474976-nzcpl\" (UID: \"6882c5d6-656a-429c-a42e-0bb482e00220\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319167 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1939f298-410c-4eff-a94b-8f0f95fb5093-serving-cert\") pod \"service-ca-operator-777779d784-rrcgm\" (UID: \"1939f298-410c-4eff-a94b-8f0f95fb5093\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319204 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qfbf\" (UniqueName: \"kubernetes.io/projected/39aa72fe-d020-4e33-81bd-3b14cf9da392-kube-api-access-2qfbf\") pod \"migrator-59844c95c7-qlk7v\" (UID: \"39aa72fe-d020-4e33-81bd-3b14cf9da392\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319229 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/077ab59f-d160-4842-ad20-f982d5447f5b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319271 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1939f298-410c-4eff-a94b-8f0f95fb5093-config\") pod \"service-ca-operator-777779d784-rrcgm\" (UID: \"1939f298-410c-4eff-a94b-8f0f95fb5093\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319309 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbkh6\" (UniqueName: \"kubernetes.io/projected/cca6cc42-3078-4920-9cea-9a69d9e03588-kube-api-access-cbkh6\") pod \"machine-config-server-nf6hw\" (UID: \"cca6cc42-3078-4920-9cea-9a69d9e03588\") " pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319331 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctrkl\" (UniqueName: \"kubernetes.io/projected/8047f7f7-2e8a-4c76-b69b-ba6919c8ec48-kube-api-access-ctrkl\") pod \"dns-default-2hzpp\" (UID: \"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48\") " pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319381 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d26cbe03-d214-423e-a80c-ef75c798c3c1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c6v4k\" (UID: \"d26cbe03-d214-423e-a80c-ef75c798c3c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319403 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-csi-data-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319426 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/077ab59f-d160-4842-ad20-f982d5447f5b-images\") pod \"machine-config-operator-74547568cd-9599z\" (UID: 
\"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319446 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/077ab59f-d160-4842-ad20-f982d5447f5b-proxy-tls\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319475 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/626c201e-8cfd-454f-b354-e74eebd622f6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-thpx5\" (UID: \"626c201e-8cfd-454f-b354-e74eebd622f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319494 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cca6cc42-3078-4920-9cea-9a69d9e03588-node-bootstrap-token\") pod \"machine-config-server-nf6hw\" (UID: \"cca6cc42-3078-4920-9cea-9a69d9e03588\") " pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319518 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-plugins-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319548 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/dcafa6f4-ba55-465f-928c-71b3687abd21-config-volume\") pod \"collect-profiles-29537820-htwpm\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319565 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cca6cc42-3078-4920-9cea-9a69d9e03588-certs\") pod \"machine-config-server-nf6hw\" (UID: \"cca6cc42-3078-4920-9cea-9a69d9e03588\") " pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319591 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-registration-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319623 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8047f7f7-2e8a-4c76-b69b-ba6919c8ec48-metrics-tls\") pod \"dns-default-2hzpp\" (UID: \"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48\") " pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319640 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/510981d7-43ee-415f-aada-96bdb8eef50e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b4sg2\" (UID: \"510981d7-43ee-415f-aada-96bdb8eef50e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319752 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f-cert\") pod \"ingress-canary-f66hg\" (UID: \"4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f\") " pod="openshift-ingress-canary/ingress-canary-f66hg" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319772 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjwzq\" (UniqueName: \"kubernetes.io/projected/6882c5d6-656a-429c-a42e-0bb482e00220-kube-api-access-jjwzq\") pod \"catalog-operator-68c6474976-nzcpl\" (UID: \"6882c5d6-656a-429c-a42e-0bb482e00220\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319794 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6dm4\" (UniqueName: \"kubernetes.io/projected/d26cbe03-d214-423e-a80c-ef75c798c3c1-kube-api-access-m6dm4\") pod \"olm-operator-6b444d44fb-c6v4k\" (UID: \"d26cbe03-d214-423e-a80c-ef75c798c3c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319811 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8047f7f7-2e8a-4c76-b69b-ba6919c8ec48-config-volume\") pod \"dns-default-2hzpp\" (UID: \"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48\") " pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319843 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85qjq\" (UniqueName: \"kubernetes.io/projected/4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f-kube-api-access-85qjq\") pod \"ingress-canary-f66hg\" (UID: \"4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f\") " pod="openshift-ingress-canary/ingress-canary-f66hg" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319860 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ckn2b\" (UniqueName: \"kubernetes.io/projected/510981d7-43ee-415f-aada-96bdb8eef50e-kube-api-access-ckn2b\") pod \"multus-admission-controller-857f4d67dd-b4sg2\" (UID: \"510981d7-43ee-415f-aada-96bdb8eef50e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319885 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-socket-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.319946 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.320312 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:23.820301266 +0000 UTC m=+167.511104077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.321023 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-csi-data-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.323837 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/077ab59f-d160-4842-ad20-f982d5447f5b-images\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.325061 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-mountpoint-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.327387 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1939f298-410c-4eff-a94b-8f0f95fb5093-config\") pod \"service-ca-operator-777779d784-rrcgm\" (UID: \"1939f298-410c-4eff-a94b-8f0f95fb5093\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.328882 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/077ab59f-d160-4842-ad20-f982d5447f5b-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.329039 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/626c201e-8cfd-454f-b354-e74eebd622f6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-thpx5\" (UID: \"626c201e-8cfd-454f-b354-e74eebd622f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.330266 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6882c5d6-656a-429c-a42e-0bb482e00220-profile-collector-cert\") pod \"catalog-operator-68c6474976-nzcpl\" (UID: \"6882c5d6-656a-429c-a42e-0bb482e00220\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.328777 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcafa6f4-ba55-465f-928c-71b3687abd21-config-volume\") pod \"collect-profiles-29537820-htwpm\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.330391 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-plugins-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.330411 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-registration-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.333083 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ffb84f35-51cc-4424-afb6-4baed4de2542-socket-dir\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.335444 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8047f7f7-2e8a-4c76-b69b-ba6919c8ec48-config-volume\") pod \"dns-default-2hzpp\" (UID: \"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48\") " pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.347559 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8047f7f7-2e8a-4c76-b69b-ba6919c8ec48-metrics-tls\") pod \"dns-default-2hzpp\" (UID: \"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48\") " pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.347678 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f-cert\") pod \"ingress-canary-f66hg\" (UID: \"4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f\") " 
pod="openshift-ingress-canary/ingress-canary-f66hg" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.348027 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/077ab59f-d160-4842-ad20-f982d5447f5b-proxy-tls\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.348179 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/cca6cc42-3078-4920-9cea-9a69d9e03588-node-bootstrap-token\") pod \"machine-config-server-nf6hw\" (UID: \"cca6cc42-3078-4920-9cea-9a69d9e03588\") " pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.351844 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d26cbe03-d214-423e-a80c-ef75c798c3c1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-c6v4k\" (UID: \"d26cbe03-d214-423e-a80c-ef75c798c3c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.352372 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3f55132b-9e49-49fb-9043-aa56c455ea0f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-bhdsm\" (UID: \"3f55132b-9e49-49fb-9043-aa56c455ea0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.355107 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1939f298-410c-4eff-a94b-8f0f95fb5093-serving-cert\") pod \"service-ca-operator-777779d784-rrcgm\" (UID: \"1939f298-410c-4eff-a94b-8f0f95fb5093\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.355540 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/cca6cc42-3078-4920-9cea-9a69d9e03588-certs\") pod \"machine-config-server-nf6hw\" (UID: \"cca6cc42-3078-4920-9cea-9a69d9e03588\") " pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.355564 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcafa6f4-ba55-465f-928c-71b3687abd21-secret-volume\") pod \"collect-profiles-29537820-htwpm\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.355607 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/38a7129e-8ecc-45f5-a199-b01a3d03b961-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.355621 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlh7x\" (UniqueName: \"kubernetes.io/projected/38a7129e-8ecc-45f5-a199-b01a3d03b961-kube-api-access-vlh7x\") pod \"ingress-operator-5b745b69d9-2dkpb\" (UID: \"38a7129e-8ecc-45f5-a199-b01a3d03b961\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.356094 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/510981d7-43ee-415f-aada-96bdb8eef50e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-b4sg2\" (UID: \"510981d7-43ee-415f-aada-96bdb8eef50e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.356138 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/626c201e-8cfd-454f-b354-e74eebd622f6-proxy-tls\") pod \"machine-config-controller-84d6567774-thpx5\" (UID: \"626c201e-8cfd-454f-b354-e74eebd622f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.357642 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6882c5d6-656a-429c-a42e-0bb482e00220-srv-cert\") pod \"catalog-operator-68c6474976-nzcpl\" (UID: \"6882c5d6-656a-429c-a42e-0bb482e00220\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.361270 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d26cbe03-d214-423e-a80c-ef75c798c3c1-srv-cert\") pod \"olm-operator-6b444d44fb-c6v4k\" (UID: \"d26cbe03-d214-423e-a80c-ef75c798c3c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.376141 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-bound-sa-token\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.400777 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bp7p\" (UniqueName: \"kubernetes.io/projected/209c9ded-078e-4147-8f4f-652dcc9be452-kube-api-access-2bp7p\") pod \"router-default-5444994796-btfgz\" (UID: \"209c9ded-078e-4147-8f4f-652dcc9be452\") " pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.411992 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pm7h\" (UniqueName: \"kubernetes.io/projected/8e1b2a41-1776-4907-b520-c7c941c17a54-kube-api-access-2pm7h\") pod \"marketplace-operator-79b997595-2b9vl\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") " pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.433677 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c75c10ff-a3b2-4617-b0cd-462f598ecc90-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7dstk\" (UID: \"c75c10ff-a3b2-4617-b0cd-462f598ecc90\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.435319 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.436102 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 09:03:23.936077905 +0000 UTC m=+167.626880706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.461127 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj8vb\" (UniqueName: \"kubernetes.io/projected/10f63a0f-b508-4232-ab29-f6e480875b78-kube-api-access-wj8vb\") pod \"cluster-image-registry-operator-dc59b4c8b-nzg8d\" (UID: \"10f63a0f-b508-4232-ab29-f6e480875b78\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.479597 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.485805 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.488895 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j76xw\" (UniqueName: \"kubernetes.io/projected/1939f298-410c-4eff-a94b-8f0f95fb5093-kube-api-access-j76xw\") pod \"service-ca-operator-777779d784-rrcgm\" (UID: \"1939f298-410c-4eff-a94b-8f0f95fb5093\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.493715 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.512373 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.513628 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zfr6\" (UniqueName: \"kubernetes.io/projected/3f55132b-9e49-49fb-9043-aa56c455ea0f-kube-api-access-5zfr6\") pod \"control-plane-machine-set-operator-78cbb6b69f-bhdsm\" (UID: \"3f55132b-9e49-49fb-9043-aa56c455ea0f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.535783 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc978\" (UniqueName: \"kubernetes.io/projected/077ab59f-d160-4842-ad20-f982d5447f5b-kube-api-access-vc978\") pod \"machine-config-operator-74547568cd-9599z\" (UID: \"077ab59f-d160-4842-ad20-f982d5447f5b\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.537224 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.537869 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:24.037852102 +0000 UTC m=+167.728654913 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.555160 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66ktq\" (UniqueName: \"kubernetes.io/projected/dcafa6f4-ba55-465f-928c-71b3687abd21-kube-api-access-66ktq\") pod \"collect-profiles-29537820-htwpm\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.572360 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctrkl\" (UniqueName: \"kubernetes.io/projected/8047f7f7-2e8a-4c76-b69b-ba6919c8ec48-kube-api-access-ctrkl\") pod \"dns-default-2hzpp\" (UID: \"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48\") " pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.578803 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.595338 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qfbf\" (UniqueName: \"kubernetes.io/projected/39aa72fe-d020-4e33-81bd-3b14cf9da392-kube-api-access-2qfbf\") pod \"migrator-59844c95c7-qlk7v\" (UID: \"39aa72fe-d020-4e33-81bd-3b14cf9da392\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.616513 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjllk\" (UniqueName: \"kubernetes.io/projected/ffb84f35-51cc-4424-afb6-4baed4de2542-kube-api-access-qjllk\") pod \"csi-hostpathplugin-smv9n\" (UID: \"ffb84f35-51cc-4424-afb6-4baed4de2542\") " pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.620443 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.634688 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbkh6\" (UniqueName: \"kubernetes.io/projected/cca6cc42-3078-4920-9cea-9a69d9e03588-kube-api-access-cbkh6\") pod \"machine-config-server-nf6hw\" (UID: \"cca6cc42-3078-4920-9cea-9a69d9e03588\") " pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.638539 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.639689 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:24.139674661 +0000 UTC m=+167.830477472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.647918 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5qvz8"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.648701 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.648692 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.652852 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4lxnc"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.657863 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.663496 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbrnw\" (UniqueName: \"kubernetes.io/projected/626c201e-8cfd-454f-b354-e74eebd622f6-kube-api-access-mbrnw\") pod \"machine-config-controller-84d6567774-thpx5\" (UID: \"626c201e-8cfd-454f-b354-e74eebd622f6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.668110 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.671300 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g92n"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.673145 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckn2b\" (UniqueName: \"kubernetes.io/projected/510981d7-43ee-415f-aada-96bdb8eef50e-kube-api-access-ckn2b\") pod \"multus-admission-controller-857f4d67dd-b4sg2\" (UID: \"510981d7-43ee-415f-aada-96bdb8eef50e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.674297 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.682270 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" Feb 28 09:03:23 crc kubenswrapper[4996]: W0228 09:03:23.689853 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod209c9ded_078e_4147_8f4f_652dcc9be452.slice/crio-bfe71ced941040725c8afdf303b389e073dae153dd86d5ea913e9f15943155e1 WatchSource:0}: Error finding container bfe71ced941040725c8afdf303b389e073dae153dd86d5ea913e9f15943155e1: Status 404 returned error can't find the container with id bfe71ced941040725c8afdf303b389e073dae153dd86d5ea913e9f15943155e1 Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.691053 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-nf6hw" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.693861 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85qjq\" (UniqueName: \"kubernetes.io/projected/4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f-kube-api-access-85qjq\") pod \"ingress-canary-f66hg\" (UID: \"4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f\") " pod="openshift-ingress-canary/ingress-canary-f66hg" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.711915 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjwzq\" (UniqueName: \"kubernetes.io/projected/6882c5d6-656a-429c-a42e-0bb482e00220-kube-api-access-jjwzq\") pod \"catalog-operator-68c6474976-nzcpl\" (UID: \"6882c5d6-656a-429c-a42e-0bb482e00220\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.724494 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-smv9n" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.724724 4996 generic.go:334] "Generic (PLEG): container finished" podID="372b0658-85b2-4b4d-b3ee-c4692ea9f21a" containerID="9a7dfa74db4dfcb62b630788a29de783fe23a3b012b8a1f002ebd7b59c8c25ca" exitCode=0 Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.724827 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" event={"ID":"372b0658-85b2-4b4d-b3ee-c4692ea9f21a","Type":"ContainerDied","Data":"9a7dfa74db4dfcb62b630788a29de783fe23a3b012b8a1f002ebd7b59c8c25ca"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.724858 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" event={"ID":"372b0658-85b2-4b4d-b3ee-c4692ea9f21a","Type":"ContainerStarted","Data":"d8cb19e4bb426e63d659f7dc87385ed03b4ada998bbfdae92e6175967d1bc0c3"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.739412 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.741712 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.742111 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 09:03:24.242095463 +0000 UTC m=+167.932898274 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.744665 4996 generic.go:334] "Generic (PLEG): container finished" podID="86bd2a67-a214-4aa9-afcd-8b93659acc07" containerID="cbccc4be039121d852b809269e278e7b61b31e1b0f2ce6593f5a1fc56328b889" exitCode=0 Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.744808 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" event={"ID":"86bd2a67-a214-4aa9-afcd-8b93659acc07","Type":"ContainerDied","Data":"cbccc4be039121d852b809269e278e7b61b31e1b0f2ce6593f5a1fc56328b889"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.747363 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" event={"ID":"84e53da0-501a-4dae-9a16-ef737205f6c3","Type":"ContainerStarted","Data":"d86f5860469cdd6ef3d71b73dc86470dfcf83e6a8c5ede557c2faadc174d97ad"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.750240 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.754575 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6dm4\" (UniqueName: \"kubernetes.io/projected/d26cbe03-d214-423e-a80c-ef75c798c3c1-kube-api-access-m6dm4\") pod \"olm-operator-6b444d44fb-c6v4k\" (UID: \"d26cbe03-d214-423e-a80c-ef75c798c3c1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.754590 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" event={"ID":"dc06c6a2-cc61-451c-b556-7cda3df46b14","Type":"ContainerStarted","Data":"4eb7d09da13488ed494f53515ec90ac0c0ad62c78afb8a95f9e9fb4cfaaf08a0"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.760539 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.767388 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" event={"ID":"633d3a0b-3505-49f8-a777-c785ec1d020b","Type":"ContainerStarted","Data":"389174dacb7a5ad47050fe562ac9ec1b7b84d959509a885421b8dea0e6bf30c5"} Feb 28 09:03:23 crc kubenswrapper[4996]: W0228 09:03:23.767502 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d5c455e_6954_4ad7_994d_a73049de9b62.slice/crio-a3609da8ef54d90cc61ba7383f5ecf4ed09cde4b873edb4271eb99a8644963b4 WatchSource:0}: Error finding container a3609da8ef54d90cc61ba7383f5ecf4ed09cde4b873edb4271eb99a8644963b4: Status 404 returned error can't find the container with id a3609da8ef54d90cc61ba7383f5ecf4ed09cde4b873edb4271eb99a8644963b4 Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.768614 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-f66hg" Feb 28 09:03:23 crc kubenswrapper[4996]: W0228 09:03:23.777223 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9f04f37_5418_49cc_9788_ce468f52375d.slice/crio-e698b9f534cac3356331f1205493ab9de09668c35effefa40b31dd959fd6b8df WatchSource:0}: Error finding container e698b9f534cac3356331f1205493ab9de09668c35effefa40b31dd959fd6b8df: Status 404 returned error can't find the container with id e698b9f534cac3356331f1205493ab9de09668c35effefa40b31dd959fd6b8df Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.777989 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-btfgz" event={"ID":"209c9ded-078e-4147-8f4f-652dcc9be452","Type":"ContainerStarted","Data":"bfe71ced941040725c8afdf303b389e073dae153dd86d5ea913e9f15943155e1"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.784799 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" event={"ID":"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328","Type":"ContainerStarted","Data":"8325bd646d739dcee194fc38fbe0f22720d8d073f8bfa90e07d7e5bcf1fb8e08"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.784861 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" event={"ID":"f83c2145-81d4-4d58-b3a8-1b5b0b0bf328","Type":"ContainerStarted","Data":"54a0ef419e88f6f3cd9033326ac62d3349e6f8a03a8981f6e84ba7683195a7e5"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.801769 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.826230 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="49dadef9-e7d3-492d-859f-a97b88a10d02" containerID="d50fcfb528a73824cf36b9112a798b346e6f7208af0e588a17b79e4ee804f200" exitCode=0 Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.826848 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" event={"ID":"49dadef9-e7d3-492d-859f-a97b88a10d02","Type":"ContainerDied","Data":"d50fcfb528a73824cf36b9112a798b346e6f7208af0e588a17b79e4ee804f200"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.826894 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" event={"ID":"49dadef9-e7d3-492d-859f-a97b88a10d02","Type":"ContainerStarted","Data":"ac3094b92aa5f78c65e32c9a8487cb2c898da36d1641e3bf2c09ea236dc6aceb"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.851343 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.853217 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:24.353190871 +0000 UTC m=+168.043993672 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.865895 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" event={"ID":"f51a22df-16fd-4f58-85dd-af4d0fc97752","Type":"ContainerStarted","Data":"2839879b41993f0bd140356ce4efb38dd144b8077a7e3ff4db43d8b1e7facf76"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.865956 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" event={"ID":"f51a22df-16fd-4f58-85dd-af4d0fc97752","Type":"ContainerStarted","Data":"3fce84d3dc808f88511836c9589f1186dea4bbe2e696efe4ed79d4cc0b26bfd6"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.869782 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" event={"ID":"23d763e1-10e0-4477-a48b-69d0f20032fb","Type":"ContainerStarted","Data":"396f995f6084179e6cf22f722c90a80f2d36693fea19bf3fbdcd9e36e5ba188d"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.869823 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" event={"ID":"23d763e1-10e0-4477-a48b-69d0f20032fb","Type":"ContainerStarted","Data":"8ce41df4dacdf4c9c2a7ac93b7fb36cecba2d3db9e4bbbf017f8314404d394a8"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.871430 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" 
event={"ID":"db230af9-fc19-435e-84f0-1751b4a23f15","Type":"ContainerStarted","Data":"15c25e94c05b019f62bc23a7ea5a48edb6754a0fa763791281131c0d746655df"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.871453 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" event={"ID":"db230af9-fc19-435e-84f0-1751b4a23f15","Type":"ContainerStarted","Data":"37a5f495696699461c5e1bda7c4daa41bfc3bfddac7c85076e384f690146a312"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.874344 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" event={"ID":"9507b95b-ce08-4e04-b3aa-6bb55279c631","Type":"ContainerStarted","Data":"859f83ae99f3f2a19ff90ef4c51fafbf04d878a5ce61274cbda0ebf037616610"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.874379 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" event={"ID":"9507b95b-ce08-4e04-b3aa-6bb55279c631","Type":"ContainerStarted","Data":"fddd57e54f1d239afe95e5a15f176bfd1928014016e6aabd5e7bfa91e361c95b"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.875195 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.877098 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4lxnc" event={"ID":"eb817855-81f3-4906-9fc8-6d4d02a8ca98","Type":"ContainerStarted","Data":"200a9a53262a810b42c4ca6c505e7f67736449b4e956e110ee702eed349cf34e"} Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.877564 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 
09:03:23.893148 4996 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-w6cs4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.893193 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" podUID="9507b95b-ce08-4e04-b3aa-6bb55279c631" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.894104 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-plkrt"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.899076 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.925818 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.936547 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.950443 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-v49dx"] Feb 28 09:03:23 crc kubenswrapper[4996]: I0228 09:03:23.959672 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:23 crc kubenswrapper[4996]: E0228 09:03:23.961756 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:24.46174473 +0000 UTC m=+168.152547541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.001156 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.040606 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.060578 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.060942 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:24.560925215 +0000 UTC m=+168.251728016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.075808 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" podStartSLOduration=121.075790768 podStartE2EDuration="2m1.075790768s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:24.074801454 +0000 UTC m=+167.765604275" watchObservedRunningTime="2026-02-28 09:03:24.075790768 +0000 UTC m=+167.766593579" Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.124656 
4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-47v8f" podStartSLOduration=120.124639128 podStartE2EDuration="2m0.124639128s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:24.104610392 +0000 UTC m=+167.795413213" watchObservedRunningTime="2026-02-28 09:03:24.124639128 +0000 UTC m=+167.815441939" Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.136601 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-wt42s"] Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.163338 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.163613 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:24.663601143 +0000 UTC m=+168.354403954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.215332 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d"] Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.263702 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.264225 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:24.764211253 +0000 UTC m=+168.455014064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: W0228 09:03:24.334486 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10f63a0f_b508_4232_ab29_f6e480875b78.slice/crio-ac8d0205015a103ee2d9a2a333bd73f53811d6cc8cac6e3a40f073ae7deaef85 WatchSource:0}: Error finding container ac8d0205015a103ee2d9a2a333bd73f53811d6cc8cac6e3a40f073ae7deaef85: Status 404 returned error can't find the container with id ac8d0205015a103ee2d9a2a333bd73f53811d6cc8cac6e3a40f073ae7deaef85 Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.367244 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.367582 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:24.867569838 +0000 UTC m=+168.558372649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.456263 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm"] Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.470584 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.470887 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:24.97087192 +0000 UTC m=+168.661674731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.473422 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb"] Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.554656 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" podStartSLOduration=120.55463672 podStartE2EDuration="2m0.55463672s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:24.51462121 +0000 UTC m=+168.205424021" watchObservedRunningTime="2026-02-28 09:03:24.55463672 +0000 UTC m=+168.245439531" Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.572380 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.572775 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 09:03:25.07276342 +0000 UTC m=+168.763566231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.642205 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm"] Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.676215 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.676380 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.176358681 +0000 UTC m=+168.867161492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.676502 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.676816 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.176806901 +0000 UTC m=+168.867609712 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.717575 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk"] Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.781382 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.781709 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.281691952 +0000 UTC m=+168.972494763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.790626 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2b9vl"] Feb 28 09:03:24 crc kubenswrapper[4996]: W0228 09:03:24.801377 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcafa6f4_ba55_465f_928c_71b3687abd21.slice/crio-791710bec75866d7b2b3389fe82cf587868bc77debcc29eefa833ae119e79cc2 WatchSource:0}: Error finding container 791710bec75866d7b2b3389fe82cf587868bc77debcc29eefa833ae119e79cc2: Status 404 returned error can't find the container with id 791710bec75866d7b2b3389fe82cf587868bc77debcc29eefa833ae119e79cc2 Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.849136 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nd447" podStartSLOduration=121.849117663 podStartE2EDuration="2m1.849117663s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:24.847810123 +0000 UTC m=+168.538612934" watchObservedRunningTime="2026-02-28 09:03:24.849117663 +0000 UTC m=+168.539920474" Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.883577 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:24 crc kubenswrapper[4996]: E0228 09:03:24.886245 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.386208035 +0000 UTC m=+169.077010846 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.887326 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-xd5f6" podStartSLOduration=120.887308461 podStartE2EDuration="2m0.887308461s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:24.881869212 +0000 UTC m=+168.572672023" watchObservedRunningTime="2026-02-28 09:03:24.887308461 +0000 UTC m=+168.578111272" Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.922743 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" 
event={"ID":"e9f04f37-5418-49cc-9788-ce468f52375d","Type":"ContainerStarted","Data":"e698b9f534cac3356331f1205493ab9de09668c35effefa40b31dd959fd6b8df"} Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.925482 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4lxnc" event={"ID":"eb817855-81f3-4906-9fc8-6d4d02a8ca98","Type":"ContainerStarted","Data":"dc82bc0bff33dde5ca1de8ffaeb812461b76da494107bd664907035318694382"} Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.926434 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4lxnc" Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.933410 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" event={"ID":"79dfa3c4-7b0b-4e92-ad1a-99daa139c082","Type":"ContainerStarted","Data":"9d04b5cce8e2529a748b0d64fda997736897a49c7f1942e691c7f9020af0e4e0"} Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.934441 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" event={"ID":"dc06c6a2-cc61-451c-b556-7cda3df46b14","Type":"ContainerStarted","Data":"9775f774344c268f0093b5203ade5398e300f65f439766976bba9773ef8e60d6"} Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.968779 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-plkrt" event={"ID":"5a893a3e-768a-44b4-9ed6-86d318210be3","Type":"ContainerStarted","Data":"34af12d836f7fe4ea172dc414adbfc7647a26071500ab246f45a8d3884218c0c"} Feb 28 09:03:24 crc kubenswrapper[4996]: I0228 09:03:24.986735 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:24.987679 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" event={"ID":"2d5c455e-6954-4ad7-994d-a73049de9b62","Type":"ContainerStarted","Data":"a3609da8ef54d90cc61ba7383f5ecf4ed09cde4b873edb4271eb99a8644963b4"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:24.996741 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" event={"ID":"161bc0a4-0094-4627-9e43-ad727e5102b7","Type":"ContainerStarted","Data":"f6370f160ee2913988ad9056220d7e2955cd6e1a4df2ba68d5bf6cedb1225b73"} Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.010342 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.510311532 +0000 UTC m=+169.201114333 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.013287 4996 patch_prober.go:28] interesting pod/downloads-7954f5f757-4lxnc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.013356 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4lxnc" podUID="eb817855-81f3-4906-9fc8-6d4d02a8ca98" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.051516 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jd5qm" podStartSLOduration=121.05149749 podStartE2EDuration="2m1.05149749s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:25.013901407 +0000 UTC m=+168.704704218" watchObservedRunningTime="2026-02-28 09:03:25.05149749 +0000 UTC m=+168.742300291" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.087819 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.089282 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.589266947 +0000 UTC m=+169.280069748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.115193 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-btfgz" event={"ID":"209c9ded-078e-4147-8f4f-652dcc9be452","Type":"ContainerStarted","Data":"003172c328d9b79854f92227f8df51fe306950efb5183040be27a4298bfd506b"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.154949 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" event={"ID":"372b0658-85b2-4b4d-b3ee-c4692ea9f21a","Type":"ContainerStarted","Data":"6765fdda0b71bf35bf5a72527ef58352c93d7dfe18906c7bfb2bcd7d0d9ae423"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.158098 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 
09:03:25.167254 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" event={"ID":"21bad419-219a-4957-9239-8b7583268ed1","Type":"ContainerStarted","Data":"eae81fc62a37d855d213b4d5c82ed46ffcb47a5092d5ea499e4f1ba251bee202"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.174321 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v49dx" event={"ID":"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43","Type":"ContainerStarted","Data":"ac20d3c13e605c12ac83c5695f6ec3cf75dabe0fb31c24290076a22e25392493"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.192682 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.195054 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.69503526 +0000 UTC m=+169.385838071 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.196408 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" event={"ID":"dcafa6f4-ba55-465f-928c-71b3687abd21","Type":"ContainerStarted","Data":"791710bec75866d7b2b3389fe82cf587868bc77debcc29eefa833ae119e79cc2"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.196780 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.197366 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.697353474 +0000 UTC m=+169.388156285 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.198937 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" event={"ID":"10f63a0f-b508-4232-ab29-f6e480875b78","Type":"ContainerStarted","Data":"ac8d0205015a103ee2d9a2a333bd73f53811d6cc8cac6e3a40f073ae7deaef85"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.245149 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" event={"ID":"e7dcb17a-35c9-45cd-a696-faa52e68849e","Type":"ContainerStarted","Data":"7344e184bd4360bab9073737171b1e1376f4c0b673ba5a9f8944656bbd899c67"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.245187 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" event={"ID":"e7dcb17a-35c9-45cd-a696-faa52e68849e","Type":"ContainerStarted","Data":"4f491c3787457d41750dbc3010dd8ca7eb840df741a915f5f8f6005cb76b5b9a"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.246058 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.250155 4996 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-6cwnw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 
10.217.0.30:5443: connect: connection refused" start-of-body= Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.250210 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" podUID="e7dcb17a-35c9-45cd-a696-faa52e68849e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.278559 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" event={"ID":"9d7692c9-64ce-41eb-a54c-9217e614a670","Type":"ContainerStarted","Data":"6143c43a6821cf0c1a435c70992fb712ae1e72d6804e587f33c3efa45fd3ba56"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.278608 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" event={"ID":"9d7692c9-64ce-41eb-a54c-9217e614a670","Type":"ContainerStarted","Data":"a9660122dabb32e0e393ea336a25d5958088edc4c8f9d708f2ec3e44a6119980"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.292176 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" event={"ID":"633d3a0b-3505-49f8-a777-c785ec1d020b","Type":"ContainerStarted","Data":"0e0ca0fb5b49967986cee0414a5de67d533444034a3b9e379eeae19735f686d5"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.297093 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" event={"ID":"84e53da0-501a-4dae-9a16-ef737205f6c3","Type":"ContainerStarted","Data":"69f0a972527aab1546de3c9f03c53a68fc0283b5eab91a645848d7619ae00d5f"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.298414 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.299271 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.799225294 +0000 UTC m=+169.490028115 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.304602 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" event={"ID":"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa","Type":"ContainerStarted","Data":"dba550ca676737fd1332d1845cf8312b4817de8b8adcf8719a24ed269d00fff3"} Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.317598 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.362601 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" podStartSLOduration=122.362581728 podStartE2EDuration="2m2.362581728s" podCreationTimestamp="2026-02-28 
09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:25.353331849 +0000 UTC m=+169.044134680" watchObservedRunningTime="2026-02-28 09:03:25.362581728 +0000 UTC m=+169.053384539" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.399983 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.400147 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rzg82" podStartSLOduration=121.40012845 podStartE2EDuration="2m1.40012845s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:25.398404609 +0000 UTC m=+169.089207420" watchObservedRunningTime="2026-02-28 09:03:25.40012845 +0000 UTC m=+169.090931261" Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.400286 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:25.900274613 +0000 UTC m=+169.591077424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.484878 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrqvg" podStartSLOduration=121.484826722 podStartE2EDuration="2m1.484826722s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:25.481316748 +0000 UTC m=+169.172119559" watchObservedRunningTime="2026-02-28 09:03:25.484826722 +0000 UTC m=+169.175629533" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.501492 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.502767 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.002750197 +0000 UTC m=+169.693553008 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.518902 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" podStartSLOduration=121.51888583 podStartE2EDuration="2m1.51888583s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:25.517685212 +0000 UTC m=+169.208488023" watchObservedRunningTime="2026-02-28 09:03:25.51888583 +0000 UTC m=+169.209688661" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.519524 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.572840 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5qvz8" podStartSLOduration=122.572823791 podStartE2EDuration="2m2.572823791s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:25.571488439 +0000 UTC m=+169.262291250" watchObservedRunningTime="2026-02-28 09:03:25.572823791 +0000 UTC m=+169.263626602" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.573197 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/downloads-7954f5f757-4lxnc" podStartSLOduration=122.57319249 podStartE2EDuration="2m2.57319249s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:25.538440635 +0000 UTC m=+169.229243446" watchObservedRunningTime="2026-02-28 09:03:25.57319249 +0000 UTC m=+169.263995291" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.605308 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.605831 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.105820825 +0000 UTC m=+169.796623636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.640562 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-btfgz" podStartSLOduration=121.64054049 podStartE2EDuration="2m1.64054049s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:25.604224497 +0000 UTC m=+169.295027318" watchObservedRunningTime="2026-02-28 09:03:25.64054049 +0000 UTC m=+169.331343301" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.641356 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:25 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:25 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:25 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.641458 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.719617 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.720088 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.220072479 +0000 UTC m=+169.910875290 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.821300 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.822265 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.322242905 +0000 UTC m=+170.013045716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.902965 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-smv9n"] Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.926339 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:25 crc kubenswrapper[4996]: E0228 09:03:25.926859 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.426832799 +0000 UTC m=+170.117635610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.954571 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm"] Feb 28 09:03:25 crc kubenswrapper[4996]: I0228 09:03:25.964350 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5"] Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.029063 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.029444 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.529429835 +0000 UTC m=+170.220232636 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.114086 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v"] Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.115832 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-f66hg"] Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.130686 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.131162 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.63112091 +0000 UTC m=+170.321923721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.158953 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9599z"] Feb 28 09:03:26 crc kubenswrapper[4996]: W0228 09:03:26.170651 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cc77c67_0aa7_44b1_94eb_fc6aab6e7b1f.slice/crio-11de4a6899ed42594c72c0c82758567eff145f986ffcf88a821059b8fa0e685a WatchSource:0}: Error finding container 11de4a6899ed42594c72c0c82758567eff145f986ffcf88a821059b8fa0e685a: Status 404 returned error can't find the container with id 11de4a6899ed42594c72c0c82758567eff145f986ffcf88a821059b8fa0e685a Feb 28 09:03:26 crc kubenswrapper[4996]: W0228 09:03:26.201164 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077ab59f_d160_4842_ad20_f982d5447f5b.slice/crio-839f97bfcb45ef6834ccc5a3a71443bf7f51f04f077b99c0fe6f6781019fa230 WatchSource:0}: Error finding container 839f97bfcb45ef6834ccc5a3a71443bf7f51f04f077b99c0fe6f6781019fa230: Status 404 returned error can't find the container with id 839f97bfcb45ef6834ccc5a3a71443bf7f51f04f077b99c0fe6f6781019fa230 Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.241216 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.241603 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.741586494 +0000 UTC m=+170.432389305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.243993 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k"] Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.320785 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl"] Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.339614 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2hzpp"] Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.342193 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.344096 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.844080238 +0000 UTC m=+170.534883049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.362193 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" event={"ID":"3f55132b-9e49-49fb-9043-aa56c455ea0f","Type":"ContainerStarted","Data":"704f3689f43767ac21f17f60295f62374f0bc78ebf493ba49359f986ca471a33"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.372946 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v" event={"ID":"39aa72fe-d020-4e33-81bd-3b14cf9da392","Type":"ContainerStarted","Data":"2f604dff2fc8951215548af266e135ee69b5d73ff3335fa90f2a368feecd92ed"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.418876 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smv9n" event={"ID":"ffb84f35-51cc-4424-afb6-4baed4de2542","Type":"ContainerStarted","Data":"9a465214ab84dc97b4b1f2fde50228b337378be54d607c81e8a0c6524a13ba50"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.432402 4996 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-b4sg2"] Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.443325 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.443612 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:26.943598171 +0000 UTC m=+170.634400982 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.443738 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" event={"ID":"38a7129e-8ecc-45f5-a199-b01a3d03b961","Type":"ContainerStarted","Data":"e480ecc97572dcdd151fbc62b231195d9fe1000c0ad45459aa1f7d451b6c8ba0"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.443768 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" 
event={"ID":"38a7129e-8ecc-45f5-a199-b01a3d03b961","Type":"ContainerStarted","Data":"8e27c6b67ae6ab015a87b98529bc72985684ecd94e7181a5ea952f4f96a27bde"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.451102 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" event={"ID":"077ab59f-d160-4842-ad20-f982d5447f5b","Type":"ContainerStarted","Data":"839f97bfcb45ef6834ccc5a3a71443bf7f51f04f077b99c0fe6f6781019fa230"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.520201 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:26 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:26 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:26 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.520532 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.523125 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" event={"ID":"633d3a0b-3505-49f8-a777-c785ec1d020b","Type":"ContainerStarted","Data":"f06b1d4e66e530168a32d1255dc8d471b862b90186bc3f9e45fce79ead629c71"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.542631 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" 
event={"ID":"8e1b2a41-1776-4907-b520-c7c941c17a54","Type":"ContainerStarted","Data":"1c7fe2c3b06c10b82b0f7d62ff3e154a407aebd82b2b22099848ec29131bf7cf"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.544260 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.544657 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:27.044643181 +0000 UTC m=+170.735445992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.545674 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.550870 4996 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-4g92n container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.550920 
4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" podUID="2d5c455e-6954-4ad7-994d-a73049de9b62" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.551650 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" event={"ID":"c75c10ff-a3b2-4617-b0cd-462f598ecc90","Type":"ContainerStarted","Data":"a9f8db3232972fb5814b4a8960cb2b38febe565cb9c850f768d3eb4625d2265d"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.567363 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-v49dx" podStartSLOduration=123.567348461 podStartE2EDuration="2m3.567348461s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.567279169 +0000 UTC m=+170.258081980" watchObservedRunningTime="2026-02-28 09:03:26.567348461 +0000 UTC m=+170.258151272" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.620786 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" event={"ID":"626c201e-8cfd-454f-b354-e74eebd622f6","Type":"ContainerStarted","Data":"6ab6387c86affbbf5b2d02b433e1b4b86a471c43f2a92da25ab0c73bccf2ee7a"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.648659 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" event={"ID":"86bd2a67-a214-4aa9-afcd-8b93659acc07","Type":"ContainerStarted","Data":"2587b464be6901fb7879674688eb375ed89b665cac799bc486274fdb9cc3707a"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.650843 
4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.653026 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" podStartSLOduration=123.652987325 podStartE2EDuration="2m3.652987325s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.614916801 +0000 UTC m=+170.305719632" watchObservedRunningTime="2026-02-28 09:03:26.652987325 +0000 UTC m=+170.343790136" Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.653534 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:27.153515027 +0000 UTC m=+170.844317838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.663302 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" event={"ID":"e9f04f37-5418-49cc-9788-ce468f52375d","Type":"ContainerStarted","Data":"c502a0b3e1e63f14451f8731d966513947d3ed9aaec17e62ec97df66ab58e78b"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.693610 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" podStartSLOduration=123.693586749 podStartE2EDuration="2m3.693586749s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.693581689 +0000 UTC m=+170.384384500" watchObservedRunningTime="2026-02-28 09:03:26.693586749 +0000 UTC m=+170.384389560" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.694356 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-lmknx" podStartSLOduration=123.694349576 podStartE2EDuration="2m3.694349576s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.667600791 +0000 UTC m=+170.358403622" 
watchObservedRunningTime="2026-02-28 09:03:26.694349576 +0000 UTC m=+170.385152397" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.697760 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-nf6hw" event={"ID":"cca6cc42-3078-4920-9cea-9a69d9e03588","Type":"ContainerStarted","Data":"ab741be54349fd83d21fc3cd3411c21ce7c9be2172757f8bbdb001d4a8b26f63"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.716926 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f66hg" event={"ID":"4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f","Type":"ContainerStarted","Data":"11de4a6899ed42594c72c0c82758567eff145f986ffcf88a821059b8fa0e685a"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.733475 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-plkrt" event={"ID":"5a893a3e-768a-44b4-9ed6-86d318210be3","Type":"ContainerStarted","Data":"bb2aae8ae6617c91b9899618703ddd719bd5184cebcb7d749e4c000328513c53"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.735628 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.736536 4996 patch_prober.go:28] interesting pod/console-operator-58897d9998-plkrt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.736566 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-plkrt" podUID="5a893a3e-768a-44b4-9ed6-86d318210be3" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: 
connect: connection refused" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.747100 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" event={"ID":"49dadef9-e7d3-492d-859f-a97b88a10d02","Type":"ContainerStarted","Data":"3fae87363c42d232cdab471ab2966d28aa0f968a7c866d1024c5d99b096821cb"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.748871 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-x7xr9" podStartSLOduration=122.748853851 podStartE2EDuration="2m2.748853851s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.746117886 +0000 UTC m=+170.436920697" watchObservedRunningTime="2026-02-28 09:03:26.748853851 +0000 UTC m=+170.439656662" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.751476 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.752448 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:27.252424716 +0000 UTC m=+170.943227527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.763135 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" event={"ID":"1939f298-410c-4eff-a94b-8f0f95fb5093","Type":"ContainerStarted","Data":"948cbc2f629bd646567476dd5bd217a0248d93c49547933d1e605d9882120764"} Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.764251 4996 patch_prober.go:28] interesting pod/downloads-7954f5f757-4lxnc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.764306 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4lxnc" podUID="eb817855-81f3-4906-9fc8-6d4d02a8ca98" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.808305 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-plkrt" podStartSLOduration=123.808287583 podStartE2EDuration="2m3.808287583s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.807733309 +0000 UTC 
m=+170.498536120" watchObservedRunningTime="2026-02-28 09:03:26.808287583 +0000 UTC m=+170.499090394" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.813921 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" podStartSLOduration=122.813905256 podStartE2EDuration="2m2.813905256s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.782700254 +0000 UTC m=+170.473503095" watchObservedRunningTime="2026-02-28 09:03:26.813905256 +0000 UTC m=+170.504708067" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.829706 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-nf6hw" podStartSLOduration=6.829684191 podStartE2EDuration="6.829684191s" podCreationTimestamp="2026-02-28 09:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.826310171 +0000 UTC m=+170.517113002" watchObservedRunningTime="2026-02-28 09:03:26.829684191 +0000 UTC m=+170.520487002" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.853429 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" podStartSLOduration=122.853411224 podStartE2EDuration="2m2.853411224s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.85154192 +0000 UTC m=+170.542344741" watchObservedRunningTime="2026-02-28 09:03:26.853411224 +0000 UTC m=+170.544214025" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.855715 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.870666 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:27.370642643 +0000 UTC m=+171.061445454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.876512 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" podStartSLOduration=122.876493802 podStartE2EDuration="2m2.876493802s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:26.875873148 +0000 UTC m=+170.566675959" watchObservedRunningTime="2026-02-28 09:03:26.876493802 +0000 UTC m=+170.567296613" Feb 28 09:03:26 crc kubenswrapper[4996]: I0228 09:03:26.961198 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:26 crc kubenswrapper[4996]: E0228 09:03:26.961672 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:27.461654055 +0000 UTC m=+171.152456866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.064761 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.074585 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-6cwnw" Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.075296 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 09:03:27.575273503 +0000 UTC m=+171.266076314 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.167857 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.169363 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:27.669320657 +0000 UTC m=+171.360123468 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.270246 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.270484 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:27.770473719 +0000 UTC m=+171.461276530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.341923 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.342284 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.370947 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.371281 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:27.871264873 +0000 UTC m=+171.562067684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.473807 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.474400 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:27.974387002 +0000 UTC m=+171.665189803 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.518596 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:27 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:27 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:27 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.518650 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.575204 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.575536 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 09:03:28.075521494 +0000 UTC m=+171.766324305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.683071 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.683719 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.183706933 +0000 UTC m=+171.874509744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.754819 4996 ???:1] "http: TLS handshake error from 192.168.126.11:54696: no serving certificate available for the kubelet" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.788266 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.788769 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.288731588 +0000 UTC m=+171.979534409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.788908 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.789476 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.289465786 +0000 UTC m=+171.980268587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.803571 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-rrcgm" event={"ID":"1939f298-410c-4eff-a94b-8f0f95fb5093","Type":"ContainerStarted","Data":"d5b9f1dbf7aaad9378673ce61bc8d2dd0c5a4fd2f17936531a9a40748f12eef1"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.853745 4996 ???:1] "http: TLS handshake error from 192.168.126.11:54710: no serving certificate available for the kubelet" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.869148 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" event={"ID":"21bad419-219a-4957-9239-8b7583268ed1","Type":"ContainerStarted","Data":"519862952e8ba54a5bb3b2f342d16fb18496f6ebc846e609dcbca79872c58286"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.869215 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" event={"ID":"21bad419-219a-4957-9239-8b7583268ed1","Type":"ContainerStarted","Data":"8f3ffda50a732d6cff95396a3d35ae5bc97e4f12bdaa4f74b6285ea38d762426"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.869280 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.891239 4996 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.892964 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.392937112 +0000 UTC m=+172.083739923 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.898069 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" podStartSLOduration=123.898044084 podStartE2EDuration="2m3.898044084s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:27.897325987 +0000 UTC m=+171.588128788" watchObservedRunningTime="2026-02-28 09:03:27.898044084 +0000 UTC m=+171.588846895" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.902606 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" 
event={"ID":"161bc0a4-0094-4627-9e43-ad727e5102b7","Type":"ContainerStarted","Data":"8a5c4ac1247d23f80596d8ce33329cf140514e33760dcb06492f10d5a489bab4"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.902726 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" event={"ID":"161bc0a4-0094-4627-9e43-ad727e5102b7","Type":"ContainerStarted","Data":"9da2c132212a8fed910092461002a338856837b6bb9b16d178c66b27a3984c15"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.930903 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-jrzvn" podStartSLOduration=124.930881914 podStartE2EDuration="2m4.930881914s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:27.928962808 +0000 UTC m=+171.619765619" watchObservedRunningTime="2026-02-28 09:03:27.930881914 +0000 UTC m=+171.621684725" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.952168 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" event={"ID":"86bd2a67-a214-4aa9-afcd-8b93659acc07","Type":"ContainerStarted","Data":"a88246bf66828d14e09aff8ece2128785fa166efb52be009a3c81de04b9aa29b"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.974368 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" event={"ID":"d26cbe03-d214-423e-a80c-ef75c798c3c1","Type":"ContainerStarted","Data":"d1828580f9dc95c7be1e1a8dcaf2432e0cbc3d402c9b0930c4d5c3750b379086"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.974433 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" 
event={"ID":"d26cbe03-d214-423e-a80c-ef75c798c3c1","Type":"ContainerStarted","Data":"98bdb88cb9c7873e5653498dedfcffc36cc8f31196f06578e47972551bdb6907"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.974978 4996 ???:1] "http: TLS handshake error from 192.168.126.11:54724: no serving certificate available for the kubelet" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.975444 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.979118 4996 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-c6v4k container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.979164 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" podUID="d26cbe03-d214-423e-a80c-ef75c798c3c1" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.985699 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" podStartSLOduration=124.985672725 podStartE2EDuration="2m4.985672725s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:27.984337653 +0000 UTC m=+171.675140484" watchObservedRunningTime="2026-02-28 09:03:27.985672725 +0000 UTC m=+171.676475536" Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.988306 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-nf6hw" event={"ID":"cca6cc42-3078-4920-9cea-9a69d9e03588","Type":"ContainerStarted","Data":"3ffca14d5be976abc98e6412e56c37238a0e290f65be3c2e9ea8f17e85645d21"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.990344 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" event={"ID":"c75c10ff-a3b2-4617-b0cd-462f598ecc90","Type":"ContainerStarted","Data":"4211c9b0101dd83b534ae078a349b87f2ce5493a687ea03dc882f52b36bcb097"} Feb 28 09:03:27 crc kubenswrapper[4996]: I0228 09:03:27.993880 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:27 crc kubenswrapper[4996]: E0228 09:03:27.995092 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.495079358 +0000 UTC m=+172.185882169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.011533 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" event={"ID":"8e1b2a41-1776-4907-b520-c7c941c17a54","Type":"ContainerStarted","Data":"65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.012511 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.018757 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" event={"ID":"6882c5d6-656a-429c-a42e-0bb482e00220","Type":"ContainerStarted","Data":"1c1fc8bd38c59d45ad9f1ec986c2c02d4425bada2b60db0a187dc47999a6ee33"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.018819 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" event={"ID":"6882c5d6-656a-429c-a42e-0bb482e00220","Type":"ContainerStarted","Data":"c725e4d5ddeb2d88cd1d0ba2409946acf9b9a8277a83eb8ff1ac8dcb66323389"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.019244 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.021422 4996 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-2b9vl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.021465 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" podUID="8e1b2a41-1776-4907-b520-c7c941c17a54" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.29:8080/healthz\": dial tcp 10.217.0.29:8080: connect: connection refused" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.022516 4996 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nzcpl container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.022612 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" podUID="6882c5d6-656a-429c-a42e-0bb482e00220" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.030807 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" event={"ID":"79dfa3c4-7b0b-4e92-ad1a-99daa139c082","Type":"ContainerStarted","Data":"a0f57cd83948387b68e412de95446538781a0c7b716a2aeca58573095f0af281"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.031076 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" 
event={"ID":"79dfa3c4-7b0b-4e92-ad1a-99daa139c082","Type":"ContainerStarted","Data":"427c397c64e5fd84da750b66d0db4f4ab16465f62dc5fa8349143b38990d0603"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.030867 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" podStartSLOduration=124.030847098 podStartE2EDuration="2m4.030847098s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.02924467 +0000 UTC m=+171.720047481" watchObservedRunningTime="2026-02-28 09:03:28.030847098 +0000 UTC m=+171.721649909" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.040851 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" event={"ID":"dcafa6f4-ba55-465f-928c-71b3687abd21","Type":"ContainerStarted","Data":"1ab79257b4a147c3dea91ec3a14c5904d9e25d7a5cea7efcb49c99b4c632816c"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.048124 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" event={"ID":"2d5c455e-6954-4ad7-994d-a73049de9b62","Type":"ContainerStarted","Data":"22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.055452 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2hzpp" event={"ID":"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48","Type":"ContainerStarted","Data":"234911a08fe56aa0be58d96275a89cb65a68acacd52e11453f13c9fbba43f539"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.055555 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2hzpp" 
event={"ID":"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48","Type":"ContainerStarted","Data":"c028c8c461198d8926e746b4d84876489f1a4b9f777b7187769303a7f73fee2a"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.064415 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7dstk" podStartSLOduration=124.064387884 podStartE2EDuration="2m4.064387884s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.063592215 +0000 UTC m=+171.754395036" watchObservedRunningTime="2026-02-28 09:03:28.064387884 +0000 UTC m=+171.755190715" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.074834 4996 ???:1] "http: TLS handshake error from 192.168.126.11:54726: no serving certificate available for the kubelet" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.092209 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p979b" event={"ID":"c4e7dd0a-ffe0-4b88-8f49-e20b5defb0aa","Type":"ContainerStarted","Data":"3a81c9a0565590e0b8784308a33c931022c6a12d75a28c055fc4ef7fabcf1759"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.096720 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.097153 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 09:03:28.597102771 +0000 UTC m=+172.287905602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.098082 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.098953 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.598938105 +0000 UTC m=+172.289740916 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.107758 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v" event={"ID":"39aa72fe-d020-4e33-81bd-3b14cf9da392","Type":"ContainerStarted","Data":"1a5039c19fc65e17f90377eb56acec242689d30705d0707e8a9f29f620bd7914"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.108155 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v" event={"ID":"39aa72fe-d020-4e33-81bd-3b14cf9da392","Type":"ContainerStarted","Data":"db623f3601946da70f8e53472578a9f6560b653234a31b335c2f97f29e38ede0"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.164201 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v49dx" event={"ID":"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43","Type":"ContainerStarted","Data":"a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.176058 4996 ???:1] "http: TLS handshake error from 192.168.126.11:54742: no serving certificate available for the kubelet" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.193273 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" podStartSLOduration=124.175996955 podStartE2EDuration="2m4.175996955s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.104340013 +0000 UTC m=+171.795142824" watchObservedRunningTime="2026-02-28 09:03:28.175996955 +0000 UTC m=+171.866799786" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.194814 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" event={"ID":"38a7129e-8ecc-45f5-a199-b01a3d03b961","Type":"ContainerStarted","Data":"f7fea3e744ac04a5f00d5d84fd748434be7edb55edf16185d67e8e77d4254bf2"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.200987 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.201991 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.701959591 +0000 UTC m=+172.392762402 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.214174 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" event={"ID":"3f55132b-9e49-49fb-9043-aa56c455ea0f","Type":"ContainerStarted","Data":"a1afec33fa38b1eccda3faa5e742f535ca928efd3803cff8592cf39f5c6af3af"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.215118 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" podStartSLOduration=125.215104534 podStartE2EDuration="2m5.215104534s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.215086674 +0000 UTC m=+171.905889485" watchObservedRunningTime="2026-02-28 09:03:28.215104534 +0000 UTC m=+171.905907345" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.215919 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" podStartSLOduration=124.215911753 podStartE2EDuration="2m4.215911753s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.175989105 +0000 UTC m=+171.866791926" watchObservedRunningTime="2026-02-28 09:03:28.215911753 +0000 UTC m=+171.906714584" Feb 28 
09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.232621 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-nzg8d" event={"ID":"10f63a0f-b508-4232-ab29-f6e480875b78","Type":"ContainerStarted","Data":"5508cb26a0f2b47bda5532579048cd157c9043869a05e6a11001ace265740ac1"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.263465 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" event={"ID":"077ab59f-d160-4842-ad20-f982d5447f5b","Type":"ContainerStarted","Data":"efe93ecfa45ca8366978229aa1e84dcbda0d9fc8334c54630c864e5ed2ae5ef7"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.263516 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" event={"ID":"077ab59f-d160-4842-ad20-f982d5447f5b","Type":"ContainerStarted","Data":"45be35a551dfc2cc064a7be9f1d20a9cf6839b9163c2217eec672e16830fda0a"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.269901 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-wt42s" podStartSLOduration=124.269871325 podStartE2EDuration="2m4.269871325s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.267882557 +0000 UTC m=+171.958685388" watchObservedRunningTime="2026-02-28 09:03:28.269871325 +0000 UTC m=+171.960674136" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.299964 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-f66hg" event={"ID":"4cc77c67-0aa7-44b1-94eb-fc6aab6e7b1f","Type":"ContainerStarted","Data":"2b83728ed40a3ab36df4c5b8b2b37da5b13bcab626da86a300a6ea9e19ca53eb"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 
09:03:28.304123 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.305191 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.805178893 +0000 UTC m=+172.495981704 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.341220 4996 ???:1] "http: TLS handshake error from 192.168.126.11:54754: no serving certificate available for the kubelet" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.342491 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" event={"ID":"626c201e-8cfd-454f-b354-e74eebd622f6","Type":"ContainerStarted","Data":"f2bb7483e8e0d1912ae9918f4e911df60a1de29107e69f385637ca27506fb882"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.342540 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" 
event={"ID":"626c201e-8cfd-454f-b354-e74eebd622f6","Type":"ContainerStarted","Data":"51c9f00be72653048434c89049578d6bcd35796b350e5f21b66605bf8fccec73"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.346063 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" event={"ID":"510981d7-43ee-415f-aada-96bdb8eef50e","Type":"ContainerStarted","Data":"857f620a53c1bfce620581710ec6f2f70aacc084561fd8daddc32ce6d41ea672"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.346099 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" event={"ID":"510981d7-43ee-415f-aada-96bdb8eef50e","Type":"ContainerStarted","Data":"3bd702565b83d47c21dfa619d5de68df5869385f4721dfbccc01655d6a9da6b1"} Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.367254 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.372459 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qlk7v" podStartSLOduration=124.37244529 podStartE2EDuration="2m4.37244529s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.322295079 +0000 UTC m=+172.013097880" watchObservedRunningTime="2026-02-28 09:03:28.37244529 +0000 UTC m=+172.063248101" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.373807 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-bhdsm" podStartSLOduration=124.373801943 podStartE2EDuration="2m4.373801943s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.358319055 +0000 UTC m=+172.049121856" watchObservedRunningTime="2026-02-28 09:03:28.373801943 +0000 UTC m=+172.064604754" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.407602 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.408245 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:28.9082284 +0000 UTC m=+172.599031211 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.419772 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dkpb" podStartSLOduration=124.419753224 podStartE2EDuration="2m4.419753224s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.390342056 +0000 UTC m=+172.081144867" watchObservedRunningTime="2026-02-28 09:03:28.419753224 +0000 UTC m=+172.110556035" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.473571 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9599z" podStartSLOduration=124.473553222 podStartE2EDuration="2m4.473553222s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.473103981 +0000 UTC m=+172.163906812" watchObservedRunningTime="2026-02-28 09:03:28.473553222 +0000 UTC m=+172.164356033" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.474344 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-f66hg" podStartSLOduration=8.474339081 podStartE2EDuration="8.474339081s" podCreationTimestamp="2026-02-28 09:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.418957055 +0000 UTC m=+172.109759866" watchObservedRunningTime="2026-02-28 09:03:28.474339081 +0000 UTC m=+172.165141892" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.509587 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.512385 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.012372384 +0000 UTC m=+172.703175195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.522111 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:28 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:28 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:28 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.522197 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.559029 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" podStartSLOduration=124.558989301 podStartE2EDuration="2m4.558989301s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.528614749 +0000 UTC m=+172.219417560" watchObservedRunningTime="2026-02-28 09:03:28.558989301 +0000 UTC m=+172.249792112" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.579019 4996 ???:1] "http: TLS handshake error from 
192.168.126.11:54764: no serving certificate available for the kubelet" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.586076 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-thpx5" podStartSLOduration=124.586056733 podStartE2EDuration="2m4.586056733s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:28.582486889 +0000 UTC m=+172.273289700" watchObservedRunningTime="2026-02-28 09:03:28.586056733 +0000 UTC m=+172.276859544" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.610900 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.611544 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.111519179 +0000 UTC m=+172.802321990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.712839 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.713189 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.213176632 +0000 UTC m=+172.903979443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.792744 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-plkrt" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.813542 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.813682 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.313659909 +0000 UTC m=+173.004462720 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.813835 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.814212 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.314195511 +0000 UTC m=+173.004998322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.823766 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-584vx" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.854179 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.914848 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.915066 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.415045826 +0000 UTC m=+173.105848637 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.915517 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:28 crc kubenswrapper[4996]: E0228 09:03:28.915824 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.415811135 +0000 UTC m=+173.106613936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:28 crc kubenswrapper[4996]: I0228 09:03:28.969216 4996 ???:1] "http: TLS handshake error from 192.168.126.11:54780: no serving certificate available for the kubelet" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.016479 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.016684 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.51665196 +0000 UTC m=+173.207454761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.016852 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.016990 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.017995 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.517985852 +0000 UTC m=+173.208788663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.024871 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/326e8318-b5b5-4d7b-a838-01d28808161b-metrics-certs\") pod \"network-metrics-daemon-9n7bm\" (UID: \"326e8318-b5b5-4d7b-a838-01d28808161b\") " pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.118317 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.118452 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.618432867 +0000 UTC m=+173.309235678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.118666 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.118986 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.61897771 +0000 UTC m=+173.309780521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.219506 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.219801 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.719786754 +0000 UTC m=+173.410589565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.286817 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9n7bm" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.322166 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.322560 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.822543094 +0000 UTC m=+173.513345905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.350557 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2hzpp" event={"ID":"8047f7f7-2e8a-4c76-b69b-ba6919c8ec48","Type":"ContainerStarted","Data":"ba923b77ac94edcece5288542540baa367cbd0d17b08aaf81e55cbe83eaced75"} Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.351311 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.356647 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smv9n" 
event={"ID":"ffb84f35-51cc-4424-afb6-4baed4de2542","Type":"ContainerStarted","Data":"6bbd06c87130f23a99e6a849bc1b8d614fc1239948237f462bc815d8022829ca"} Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.356684 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smv9n" event={"ID":"ffb84f35-51cc-4424-afb6-4baed4de2542","Type":"ContainerStarted","Data":"fc0c9421f1bba537f79666884c80a1fa19e6213a934ff5907a7c7250eb0bc422"} Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.363701 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-b4sg2" event={"ID":"510981d7-43ee-415f-aada-96bdb8eef50e","Type":"ContainerStarted","Data":"a9cd8c1af61a7fa47f15d992f9c9781266daa072c20ea06880f4d456313fadef"} Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.374892 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.380202 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nzcpl" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.381095 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-xhlhx" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.396267 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cqrkh"] Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.396571 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" podUID="4ea0533b-1941-4998-8989-13f7f962a294" containerName="controller-manager" containerID="cri-o://bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718" 
gracePeriod=30 Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.396898 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2hzpp" podStartSLOduration=9.39687954 podStartE2EDuration="9.39687954s" podCreationTimestamp="2026-02-28 09:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:29.38680447 +0000 UTC m=+173.077607281" watchObservedRunningTime="2026-02-28 09:03:29.39687954 +0000 UTC m=+173.087682351" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.425305 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.426170 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:29.926154705 +0000 UTC m=+173.616957516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.447535 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-c6v4k" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.518413 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4"] Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.518587 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" podUID="9507b95b-ce08-4e04-b3aa-6bb55279c631" containerName="route-controller-manager" containerID="cri-o://859f83ae99f3f2a19ff90ef4c51fafbf04d878a5ce61274cbda0ebf037616610" gracePeriod=30 Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.528213 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.535693 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:29 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:29 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:29 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.535749 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.541985 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:30.041970746 +0000 UTC m=+173.732773557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.630601 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.631061 4996 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:30.131045131 +0000 UTC m=+173.821847942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.683489 4996 ???:1] "http: TLS handshake error from 192.168.126.11:54796: no serving certificate available for the kubelet" Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.731946 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.732469 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:30.232457029 +0000 UTC m=+173.923259830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.834613 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.834961 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:30.334915393 +0000 UTC m=+174.025718204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:29 crc kubenswrapper[4996]: I0228 09:03:29.937712 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:29 crc kubenswrapper[4996]: E0228 09:03:29.945831 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:30.445791447 +0000 UTC m=+174.136594518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.005357 4996 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.030609 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9n7bm"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.038582 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:30 crc kubenswrapper[4996]: E0228 09:03:30.038912 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:30.538897227 +0000 UTC m=+174.229700028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.141158 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:30 crc kubenswrapper[4996]: E0228 09:03:30.141498 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 09:03:30.641475823 +0000 UTC m=+174.332278634 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-k6wjx" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.212912 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2899"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.214573 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.219914 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.230211 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2899"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.243253 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:30 crc kubenswrapper[4996]: E0228 09:03:30.243883 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 09:03:30.743862185 +0000 UTC m=+174.434664996 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.252819 4996 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-28T09:03:30.005392071Z","Handler":null,"Name":""} Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.272270 4996 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.272317 4996 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.294341 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.345798 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-catalog-content\") pod \"community-operators-w2899\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") " pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.345840 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpx56\" (UniqueName: \"kubernetes.io/projected/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-kube-api-access-wpx56\") pod \"community-operators-w2899\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") " pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.345878 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-utilities\") pod \"community-operators-w2899\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") " pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.345919 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.351863 4996 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.351900 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.370880 4996 generic.go:334] "Generic (PLEG): container finished" podID="4ea0533b-1941-4998-8989-13f7f962a294" containerID="bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718" exitCode=0 Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.370952 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" event={"ID":"4ea0533b-1941-4998-8989-13f7f962a294","Type":"ContainerDied","Data":"bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718"} Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.370979 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" event={"ID":"4ea0533b-1941-4998-8989-13f7f962a294","Type":"ContainerDied","Data":"b36e1e7cb8cec066179ad872d1d7906d3cf9ff8983fe29a24ef2ed1e33ae408e"} Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.370995 4996 scope.go:117] "RemoveContainer" containerID="bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.371121 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-cqrkh" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.379350 4996 generic.go:334] "Generic (PLEG): container finished" podID="9507b95b-ce08-4e04-b3aa-6bb55279c631" containerID="859f83ae99f3f2a19ff90ef4c51fafbf04d878a5ce61274cbda0ebf037616610" exitCode=0 Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.379449 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" event={"ID":"9507b95b-ce08-4e04-b3aa-6bb55279c631","Type":"ContainerDied","Data":"859f83ae99f3f2a19ff90ef4c51fafbf04d878a5ce61274cbda0ebf037616610"} Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.381370 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.382416 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smv9n" event={"ID":"ffb84f35-51cc-4424-afb6-4baed4de2542","Type":"ContainerStarted","Data":"ac28b160ab46a4b2f647ae15e395cca0702ddd92d6dc2993a108ef2096dc1092"} Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.395832 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" event={"ID":"326e8318-b5b5-4d7b-a838-01d28808161b","Type":"ContainerStarted","Data":"991bab02fb985abdf4982ab1a6990067a0bd56efe5e0dd96928e05a8c9ecb1bf"} Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.404631 4996 scope.go:117] "RemoveContainer" containerID="bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718" Feb 28 09:03:30 crc kubenswrapper[4996]: E0228 09:03:30.405061 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718\": container with ID starting with bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718 not found: ID does not exist" containerID="bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.405104 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718"} err="failed to get container status \"bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718\": rpc error: code = NotFound desc = could not find container \"bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718\": container with ID starting with bc2f50c8b7202e2bd01702a81eafcf788c3307756f37cae9c0b85f08b2916718 not found: ID does not exist" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.417841 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f5szk"] Feb 28 09:03:30 crc kubenswrapper[4996]: E0228 09:03:30.418122 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9507b95b-ce08-4e04-b3aa-6bb55279c631" containerName="route-controller-manager" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.418133 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9507b95b-ce08-4e04-b3aa-6bb55279c631" containerName="route-controller-manager" Feb 28 09:03:30 crc kubenswrapper[4996]: E0228 09:03:30.418148 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ea0533b-1941-4998-8989-13f7f962a294" containerName="controller-manager" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.418155 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea0533b-1941-4998-8989-13f7f962a294" containerName="controller-manager" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.418253 4996 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9507b95b-ce08-4e04-b3aa-6bb55279c631" containerName="route-controller-manager" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.418280 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ea0533b-1941-4998-8989-13f7f962a294" containerName="controller-manager" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.418973 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.423417 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.439833 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5szk"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.446430 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ea0533b-1941-4998-8989-13f7f962a294-serving-cert\") pod \"4ea0533b-1941-4998-8989-13f7f962a294\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.446496 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-config\") pod \"9507b95b-ce08-4e04-b3aa-6bb55279c631\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.446527 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wznwb\" (UniqueName: \"kubernetes.io/projected/9507b95b-ce08-4e04-b3aa-6bb55279c631-kube-api-access-wznwb\") pod \"9507b95b-ce08-4e04-b3aa-6bb55279c631\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.446564 4996 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-client-ca\") pod \"4ea0533b-1941-4998-8989-13f7f962a294\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.446603 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-proxy-ca-bundles\") pod \"4ea0533b-1941-4998-8989-13f7f962a294\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.446719 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9507b95b-ce08-4e04-b3aa-6bb55279c631-serving-cert\") pod \"9507b95b-ce08-4e04-b3aa-6bb55279c631\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.446752 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-client-ca\") pod \"9507b95b-ce08-4e04-b3aa-6bb55279c631\" (UID: \"9507b95b-ce08-4e04-b3aa-6bb55279c631\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.446809 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dl6r\" (UniqueName: \"kubernetes.io/projected/4ea0533b-1941-4998-8989-13f7f962a294-kube-api-access-8dl6r\") pod \"4ea0533b-1941-4998-8989-13f7f962a294\" (UID: \"4ea0533b-1941-4998-8989-13f7f962a294\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.446857 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-config\") pod \"4ea0533b-1941-4998-8989-13f7f962a294\" (UID: 
\"4ea0533b-1941-4998-8989-13f7f962a294\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.447043 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-utilities\") pod \"community-operators-w2899\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") " pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.447167 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-catalog-content\") pod \"community-operators-w2899\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") " pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.447204 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpx56\" (UniqueName: \"kubernetes.io/projected/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-kube-api-access-wpx56\") pod \"community-operators-w2899\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") " pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.447465 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-config" (OuterVolumeSpecName: "config") pod "9507b95b-ce08-4e04-b3aa-6bb55279c631" (UID: "9507b95b-ce08-4e04-b3aa-6bb55279c631"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.450967 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ea0533b-1941-4998-8989-13f7f962a294" (UID: "4ea0533b-1941-4998-8989-13f7f962a294"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.451856 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-utilities\") pod \"community-operators-w2899\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") " pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.452428 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-config" (OuterVolumeSpecName: "config") pod "4ea0533b-1941-4998-8989-13f7f962a294" (UID: "4ea0533b-1941-4998-8989-13f7f962a294"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.452809 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-client-ca" (OuterVolumeSpecName: "client-ca") pod "9507b95b-ce08-4e04-b3aa-6bb55279c631" (UID: "9507b95b-ce08-4e04-b3aa-6bb55279c631"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.453051 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-catalog-content\") pod \"community-operators-w2899\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") " pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.453393 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4ea0533b-1941-4998-8989-13f7f962a294" (UID: "4ea0533b-1941-4998-8989-13f7f962a294"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.463719 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9507b95b-ce08-4e04-b3aa-6bb55279c631-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9507b95b-ce08-4e04-b3aa-6bb55279c631" (UID: "9507b95b-ce08-4e04-b3aa-6bb55279c631"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.464327 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea0533b-1941-4998-8989-13f7f962a294-kube-api-access-8dl6r" (OuterVolumeSpecName: "kube-api-access-8dl6r") pod "4ea0533b-1941-4998-8989-13f7f962a294" (UID: "4ea0533b-1941-4998-8989-13f7f962a294"). InnerVolumeSpecName "kube-api-access-8dl6r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.464442 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9507b95b-ce08-4e04-b3aa-6bb55279c631-kube-api-access-wznwb" (OuterVolumeSpecName: "kube-api-access-wznwb") pod "9507b95b-ce08-4e04-b3aa-6bb55279c631" (UID: "9507b95b-ce08-4e04-b3aa-6bb55279c631"). InnerVolumeSpecName "kube-api-access-wznwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.472468 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ea0533b-1941-4998-8989-13f7f962a294-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ea0533b-1941-4998-8989-13f7f962a294" (UID: "4ea0533b-1941-4998-8989-13f7f962a294"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.498769 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpx56\" (UniqueName: \"kubernetes.io/projected/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-kube-api-access-wpx56\") pod \"community-operators-w2899\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") " pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.508232 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.508813 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.518889 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.519131 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.523504 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.524397 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:30 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:30 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:30 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.524437 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.540470 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-k6wjx\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.551536 4996 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.551916 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj748\" (UniqueName: \"kubernetes.io/projected/33a7d489-df52-4b28-90f9-9135da43486f-kube-api-access-sj748\") pod \"certified-operators-f5szk\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") " pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552115 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-utilities\") pod \"certified-operators-f5szk\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") " pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552172 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-catalog-content\") pod \"certified-operators-f5szk\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") " pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552254 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552271 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4ea0533b-1941-4998-8989-13f7f962a294-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552281 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552291 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wznwb\" (UniqueName: \"kubernetes.io/projected/9507b95b-ce08-4e04-b3aa-6bb55279c631-kube-api-access-wznwb\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552300 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552309 4996 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ea0533b-1941-4998-8989-13f7f962a294-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552317 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9507b95b-ce08-4e04-b3aa-6bb55279c631-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552325 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9507b95b-ce08-4e04-b3aa-6bb55279c631-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.552333 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dl6r\" (UniqueName: \"kubernetes.io/projected/4ea0533b-1941-4998-8989-13f7f962a294-kube-api-access-8dl6r\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:30 crc 
kubenswrapper[4996]: I0228 09:03:30.568352 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2899" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.578825 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.615486 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8gkdr"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.617162 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.620983 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gkdr"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.653526 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd50a361-94fd-49a2-b882-684982d99c45-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd50a361-94fd-49a2-b882-684982d99c45\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.653614 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj748\" (UniqueName: \"kubernetes.io/projected/33a7d489-df52-4b28-90f9-9135da43486f-kube-api-access-sj748\") pod \"certified-operators-f5szk\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") " 
pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.653648 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd50a361-94fd-49a2-b882-684982d99c45-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd50a361-94fd-49a2-b882-684982d99c45\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.653677 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-utilities\") pod \"certified-operators-f5szk\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") " pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.653707 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-catalog-content\") pod \"certified-operators-f5szk\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") " pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.654095 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-catalog-content\") pod \"certified-operators-f5szk\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") " pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.654557 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-utilities\") pod \"certified-operators-f5szk\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") " 
pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.684376 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj748\" (UniqueName: \"kubernetes.io/projected/33a7d489-df52-4b28-90f9-9135da43486f-kube-api-access-sj748\") pod \"certified-operators-f5szk\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") " pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.703303 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.725863 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cqrkh"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.731547 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-cqrkh"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.744346 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.755139 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-utilities\") pod \"community-operators-8gkdr\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.755228 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd50a361-94fd-49a2-b882-684982d99c45-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd50a361-94fd-49a2-b882-684982d99c45\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.755297 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzqfh\" (UniqueName: \"kubernetes.io/projected/f16f2b07-0150-49db-af38-b617e3567070-kube-api-access-fzqfh\") pod \"community-operators-8gkdr\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.755340 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd50a361-94fd-49a2-b882-684982d99c45-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd50a361-94fd-49a2-b882-684982d99c45\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.755386 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-catalog-content\") pod 
\"community-operators-8gkdr\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.755485 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd50a361-94fd-49a2-b882-684982d99c45-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fd50a361-94fd-49a2-b882-684982d99c45\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.777531 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd50a361-94fd-49a2-b882-684982d99c45-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fd50a361-94fd-49a2-b882-684982d99c45\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.813432 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b5j29"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.814989 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.835928 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5j29"] Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.838967 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.860491 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-utilities\") pod \"community-operators-8gkdr\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.861113 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzqfh\" (UniqueName: \"kubernetes.io/projected/f16f2b07-0150-49db-af38-b617e3567070-kube-api-access-fzqfh\") pod \"community-operators-8gkdr\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.861165 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-catalog-content\") pod \"community-operators-8gkdr\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.861585 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-catalog-content\") pod \"community-operators-8gkdr\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.861811 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-utilities\") pod \"community-operators-8gkdr\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " 
pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.881857 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzqfh\" (UniqueName: \"kubernetes.io/projected/f16f2b07-0150-49db-af38-b617e3567070-kube-api-access-fzqfh\") pod \"community-operators-8gkdr\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.941698 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.962534 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx77m\" (UniqueName: \"kubernetes.io/projected/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-kube-api-access-jx77m\") pod \"certified-operators-b5j29\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.962635 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-utilities\") pod \"certified-operators-b5j29\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.962671 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-catalog-content\") pod \"certified-operators-b5j29\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:30 crc kubenswrapper[4996]: I0228 09:03:30.997474 4996 ???:1] "http: TLS handshake error 
from 192.168.126.11:54802: no serving certificate available for the kubelet" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.004928 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k6wjx"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.053398 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea0533b-1941-4998-8989-13f7f962a294" path="/var/lib/kubelet/pods/4ea0533b-1941-4998-8989-13f7f962a294/volumes" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.054251 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.063943 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-utilities\") pod \"certified-operators-b5j29\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.064022 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-catalog-content\") pod \"certified-operators-b5j29\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.064088 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx77m\" (UniqueName: \"kubernetes.io/projected/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-kube-api-access-jx77m\") pod \"certified-operators-b5j29\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 
09:03:31.064779 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-utilities\") pod \"certified-operators-b5j29\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.064984 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-catalog-content\") pod \"certified-operators-b5j29\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.096384 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2899"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.109524 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx77m\" (UniqueName: \"kubernetes.io/projected/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-kube-api-access-jx77m\") pod \"certified-operators-b5j29\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:31 crc kubenswrapper[4996]: W0228 09:03:31.109991 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb13a5c4e_ce50_4a84_8ef1_e63d18dfd06a.slice/crio-9915957ef916b4f832a47bd6c915bca5d96e839bd61c9278b20793e6ec690ab6 WatchSource:0}: Error finding container 9915957ef916b4f832a47bd6c915bca5d96e839bd61c9278b20793e6ec690ab6: Status 404 returned error can't find the container with id 9915957ef916b4f832a47bd6c915bca5d96e839bd61c9278b20793e6ec690ab6 Feb 28 09:03:31 crc kubenswrapper[4996]: W0228 09:03:31.133612 4996 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33a7d489_df52_4b28_90f9_9135da43486f.slice/crio-5cf980efbcd8368f398c3990b1bbff9811d38a0094ce869c46769b8f604e2bdf WatchSource:0}: Error finding container 5cf980efbcd8368f398c3990b1bbff9811d38a0094ce869c46769b8f604e2bdf: Status 404 returned error can't find the container with id 5cf980efbcd8368f398c3990b1bbff9811d38a0094ce869c46769b8f604e2bdf Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.139194 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f5szk"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.148204 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.149220 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.151654 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.151862 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.152234 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.156259 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.203213 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.269402 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.270048 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.371050 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.371114 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.371241 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.394529 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.439666 4996 generic.go:334] "Generic (PLEG): container finished" podID="33a7d489-df52-4b28-90f9-9135da43486f" containerID="1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f" exitCode=0 Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.439735 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5szk" event={"ID":"33a7d489-df52-4b28-90f9-9135da43486f","Type":"ContainerDied","Data":"1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.439761 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5szk" event={"ID":"33a7d489-df52-4b28-90f9-9135da43486f","Type":"ContainerStarted","Data":"5cf980efbcd8368f398c3990b1bbff9811d38a0094ce869c46769b8f604e2bdf"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.449868 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5j29"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.450747 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" 
event={"ID":"a17bc456-8bc4-464f-a3d4-3d9ac9985870","Type":"ContainerStarted","Data":"4d9af0f0514566a0763458868b6b1bab56e18c147abefee67dbe88ac34190053"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.450800 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" event={"ID":"a17bc456-8bc4-464f-a3d4-3d9ac9985870","Type":"ContainerStarted","Data":"5b978bf961f563345b577001256c8c05b9db261b01669fc9eea341a60e55f68f"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.451517 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.453885 4996 generic.go:334] "Generic (PLEG): container finished" podID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerID="f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b" exitCode=0 Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.453969 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2899" event={"ID":"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a","Type":"ContainerDied","Data":"f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.454093 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2899" event={"ID":"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a","Type":"ContainerStarted","Data":"9915957ef916b4f832a47bd6c915bca5d96e839bd61c9278b20793e6ec690ab6"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.455349 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.464246 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.464276 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4" event={"ID":"9507b95b-ce08-4e04-b3aa-6bb55279c631","Type":"ContainerDied","Data":"fddd57e54f1d239afe95e5a15f176bfd1928014016e6aabd5e7bfa91e361c95b"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.464529 4996 scope.go:117] "RemoveContainer" containerID="859f83ae99f3f2a19ff90ef4c51fafbf04d878a5ce61274cbda0ebf037616610" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.468414 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-smv9n" event={"ID":"ffb84f35-51cc-4424-afb6-4baed4de2542","Type":"ContainerStarted","Data":"203761385d20acd06fb29c186fe849e04ef86b6054ed234aed092e80904d3343"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.474423 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" event={"ID":"326e8318-b5b5-4d7b-a838-01d28808161b","Type":"ContainerStarted","Data":"4df55a0f9916bf6f42d862f8372d53e25c6ec3bd7bd5b8a686e222495f78e029"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.474481 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9n7bm" event={"ID":"326e8318-b5b5-4d7b-a838-01d28808161b","Type":"ContainerStarted","Data":"7f5a0a6989549d392d6473faa8cee0d913f7474ed6447e191c880e1d98d1d42b"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.476041 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fd50a361-94fd-49a2-b882-684982d99c45","Type":"ContainerStarted","Data":"a699f33ac2086ebee1772cbdc901030357e67ed99610cebc44d2eada26e8dcf7"} Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.498859 
4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" podStartSLOduration=127.49882035 podStartE2EDuration="2m7.49882035s" podCreationTimestamp="2026-02-28 09:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:31.494134548 +0000 UTC m=+175.184937359" watchObservedRunningTime="2026-02-28 09:03:31.49882035 +0000 UTC m=+175.189623161" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.513120 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-9n7bm" podStartSLOduration=128.513088838 podStartE2EDuration="2m8.513088838s" podCreationTimestamp="2026-02-28 09:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:31.511633254 +0000 UTC m=+175.202436075" watchObservedRunningTime="2026-02-28 09:03:31.513088838 +0000 UTC m=+175.203891649" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.516574 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:31 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:31 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:31 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.516632 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.535673 
4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-smv9n" podStartSLOduration=11.535649594 podStartE2EDuration="11.535649594s" podCreationTimestamp="2026-02-28 09:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:31.532621302 +0000 UTC m=+175.223424113" watchObservedRunningTime="2026-02-28 09:03:31.535649594 +0000 UTC m=+175.226452405" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.544220 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.544445 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.554633 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-w6cs4"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.568446 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gkdr"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.820244 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.822113 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.822432 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7c4cd55d74-s2542"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.823221 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.824437 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.826203 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.826444 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.826663 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.827184 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.827471 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.827919 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.828063 4996 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.828159 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.828246 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.828691 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.831119 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.838663 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.854998 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c4cd55d74-s2542"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.868956 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.890643 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-config\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.890961 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-client-ca\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.891181 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls9zr\" (UniqueName: \"kubernetes.io/projected/fc6d67fa-2340-4f11-bbbe-00edb743d45f-kube-api-access-ls9zr\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.891230 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-config\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.891315 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-client-ca\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.891430 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-proxy-ca-bundles\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " 
pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.891474 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6d67fa-2340-4f11-bbbe-00edb743d45f-serving-cert\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.914374 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f"] Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.915729 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a869b0d-fae2-4507-9d38-acb8109ef67c-serving-cert\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:31 crc kubenswrapper[4996]: I0228 09:03:31.915809 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/7a869b0d-fae2-4507-9d38-acb8109ef67c-kube-api-access-zjgvm\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.016884 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls9zr\" (UniqueName: \"kubernetes.io/projected/fc6d67fa-2340-4f11-bbbe-00edb743d45f-kube-api-access-ls9zr\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " 
pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.016952 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-config\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.016993 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-client-ca\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.017047 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-proxy-ca-bundles\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.017065 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6d67fa-2340-4f11-bbbe-00edb743d45f-serving-cert\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.017089 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7a869b0d-fae2-4507-9d38-acb8109ef67c-serving-cert\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.017110 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/7a869b0d-fae2-4507-9d38-acb8109ef67c-kube-api-access-zjgvm\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.017134 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-config\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.017153 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-client-ca\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.018115 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-client-ca\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.019312 
4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-config\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.019533 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-config\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.020386 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-proxy-ca-bundles\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.021178 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-client-ca\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.034027 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a869b0d-fae2-4507-9d38-acb8109ef67c-serving-cert\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc 
kubenswrapper[4996]: I0228 09:03:32.034235 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6d67fa-2340-4f11-bbbe-00edb743d45f-serving-cert\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.039951 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls9zr\" (UniqueName: \"kubernetes.io/projected/fc6d67fa-2340-4f11-bbbe-00edb743d45f-kube-api-access-ls9zr\") pod \"route-controller-manager-84fc6f6768-5bb2f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.043633 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/7a869b0d-fae2-4507-9d38-acb8109ef67c-kube-api-access-zjgvm\") pod \"controller-manager-7c4cd55d74-s2542\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.173236 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.180095 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.211814 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lpd48"] Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.213850 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.217377 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.222320 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpd48"] Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.304293 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.305632 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.316606 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.322646 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-utilities\") pod \"redhat-marketplace-lpd48\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") " pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.322696 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv4vs\" (UniqueName: \"kubernetes.io/projected/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-kube-api-access-bv4vs\") pod \"redhat-marketplace-lpd48\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") " pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.322833 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-catalog-content\") pod \"redhat-marketplace-lpd48\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") " pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.426480 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-utilities\") pod \"redhat-marketplace-lpd48\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") " pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.425092 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-utilities\") pod \"redhat-marketplace-lpd48\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") " pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.428435 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv4vs\" (UniqueName: \"kubernetes.io/projected/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-kube-api-access-bv4vs\") pod \"redhat-marketplace-lpd48\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") " pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.430346 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-catalog-content\") pod \"redhat-marketplace-lpd48\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") " pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.430487 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-catalog-content\") pod \"redhat-marketplace-lpd48\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") " pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.431736 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f"] Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.447022 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv4vs\" (UniqueName: \"kubernetes.io/projected/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-kube-api-access-bv4vs\") pod \"redhat-marketplace-lpd48\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") " pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: W0228 09:03:32.459536 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc6d67fa_2340_4f11_bbbe_00edb743d45f.slice/crio-0d5862b73ae29654b707539ae79d0f5409dd123d40b15eb8bcc34763cd8ea605 WatchSource:0}: Error finding container 0d5862b73ae29654b707539ae79d0f5409dd123d40b15eb8bcc34763cd8ea605: Status 404 returned error can't find the container with id 0d5862b73ae29654b707539ae79d0f5409dd123d40b15eb8bcc34763cd8ea605 Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.487257 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7c4cd55d74-s2542"] Feb 28 09:03:32 crc kubenswrapper[4996]: W0228 09:03:32.505086 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a869b0d_fae2_4507_9d38_acb8109ef67c.slice/crio-088507f7a638e05806f3a0439778dec60646e88d67d3448839f42b29cd97654f WatchSource:0}: Error finding container 088507f7a638e05806f3a0439778dec60646e88d67d3448839f42b29cd97654f: Status 404 returned error can't find 
the container with id 088507f7a638e05806f3a0439778dec60646e88d67d3448839f42b29cd97654f Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.510926 4996 generic.go:334] "Generic (PLEG): container finished" podID="f16f2b07-0150-49db-af38-b617e3567070" containerID="3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b" exitCode=0 Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.510976 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gkdr" event={"ID":"f16f2b07-0150-49db-af38-b617e3567070","Type":"ContainerDied","Data":"3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b"} Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.511031 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gkdr" event={"ID":"f16f2b07-0150-49db-af38-b617e3567070","Type":"ContainerStarted","Data":"dd5a067ed1162a191c8cfe818e6e65de804d205a659eb8eee2dde8e1e802532a"} Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.515808 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:32 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:32 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:32 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.515847 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.530723 4996 generic.go:334] "Generic (PLEG): container finished" podID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" 
containerID="00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc" exitCode=0 Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.530816 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5j29" event={"ID":"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7","Type":"ContainerDied","Data":"00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc"} Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.530844 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5j29" event={"ID":"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7","Type":"ContainerStarted","Data":"e8c52edaaefd7dba6ec9b9f1c519a30791e7a7f07a0924d4b1971cfd7efe4f0f"} Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.539401 4996 generic.go:334] "Generic (PLEG): container finished" podID="fd50a361-94fd-49a2-b882-684982d99c45" containerID="e08d8b39130659f6466a42672f6c2cd6db30d61503349d1f749d3fc3ae799136" exitCode=0 Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.539531 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fd50a361-94fd-49a2-b882-684982d99c45","Type":"ContainerDied","Data":"e08d8b39130659f6466a42672f6c2cd6db30d61503349d1f749d3fc3ae799136"} Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.543078 4996 generic.go:334] "Generic (PLEG): container finished" podID="dcafa6f4-ba55-465f-928c-71b3687abd21" containerID="1ab79257b4a147c3dea91ec3a14c5904d9e25d7a5cea7efcb49c99b4c632816c" exitCode=0 Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.543179 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" event={"ID":"dcafa6f4-ba55-465f-928c-71b3687abd21","Type":"ContainerDied","Data":"1ab79257b4a147c3dea91ec3a14c5904d9e25d7a5cea7efcb49c99b4c632816c"} Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.548877 4996 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" event={"ID":"fc6d67fa-2340-4f11-bbbe-00edb743d45f","Type":"ContainerStarted","Data":"0d5862b73ae29654b707539ae79d0f5409dd123d40b15eb8bcc34763cd8ea605"} Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.557209 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"434c5b33-266d-4651-8fe2-a6cd0ac816c3","Type":"ContainerStarted","Data":"daf80c8ce51cc955349aa61d0cfe45973ddb3c5ddbbba3c8404c7e4e51d08133"} Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.557287 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"434c5b33-266d-4651-8fe2-a6cd0ac816c3","Type":"ContainerStarted","Data":"12b5b847fc3e978df8398a738108981d04da8beca64283d82ef8b83044f53dda"} Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.563607 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.565569 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hpmk6" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.585242 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.585222841 podStartE2EDuration="1.585222841s" podCreationTimestamp="2026-02-28 09:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:32.577553578 +0000 UTC m=+176.268356399" watchObservedRunningTime="2026-02-28 09:03:32.585222841 +0000 UTC m=+176.276025652" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.610522 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r6zg5"] Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.613180 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.628696 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6zg5"] Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.740961 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-catalog-content\") pod \"redhat-marketplace-r6zg5\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.741289 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-utilities\") pod \"redhat-marketplace-r6zg5\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.741331 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75b78\" (UniqueName: \"kubernetes.io/projected/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-kube-api-access-75b78\") pod \"redhat-marketplace-r6zg5\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.844223 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-catalog-content\") pod \"redhat-marketplace-r6zg5\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.844287 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-utilities\") pod \"redhat-marketplace-r6zg5\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.844333 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75b78\" (UniqueName: \"kubernetes.io/projected/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-kube-api-access-75b78\") pod \"redhat-marketplace-r6zg5\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.845588 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-catalog-content\") pod \"redhat-marketplace-r6zg5\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.845845 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-utilities\") pod \"redhat-marketplace-r6zg5\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.900537 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75b78\" (UniqueName: \"kubernetes.io/projected/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-kube-api-access-75b78\") pod \"redhat-marketplace-r6zg5\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:32 crc kubenswrapper[4996]: I0228 09:03:32.963456 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.057653 4996 patch_prober.go:28] interesting pod/downloads-7954f5f757-4lxnc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.058136 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4lxnc" podUID="eb817855-81f3-4906-9fc8-6d4d02a8ca98" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.057700 4996 patch_prober.go:28] interesting pod/downloads-7954f5f757-4lxnc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.058219 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4lxnc" podUID="eb817855-81f3-4906-9fc8-6d4d02a8ca98" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.072082 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9507b95b-ce08-4e04-b3aa-6bb55279c631" path="/var/lib/kubelet/pods/9507b95b-ce08-4e04-b3aa-6bb55279c631/volumes" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.160403 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.160441 4996 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.170883 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpd48"] Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.172145 4996 patch_prober.go:28] interesting pod/console-f9d7485db-v49dx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.172203 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-v49dx" podUID="ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.513483 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.532563 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:33 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:33 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:33 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.532621 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:33 crc 
kubenswrapper[4996]: I0228 09:03:33.587043 4996 ???:1] "http: TLS handshake error from 192.168.126.11:51030: no serving certificate available for the kubelet" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.615108 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ltn9t"] Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.616808 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.619553 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.619899 4996 generic.go:334] "Generic (PLEG): container finished" podID="434c5b33-266d-4651-8fe2-a6cd0ac816c3" containerID="daf80c8ce51cc955349aa61d0cfe45973ddb3c5ddbbba3c8404c7e4e51d08133" exitCode=0 Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.619994 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"434c5b33-266d-4651-8fe2-a6cd0ac816c3","Type":"ContainerDied","Data":"daf80c8ce51cc955349aa61d0cfe45973ddb3c5ddbbba3c8404c7e4e51d08133"} Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.631068 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltn9t"] Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.648739 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6zg5"] Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.655480 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" event={"ID":"7a869b0d-fae2-4507-9d38-acb8109ef67c","Type":"ContainerStarted","Data":"13465e38b00f1363b1d524364b315248524fa07c257399a24c2dd5ae162211db"} Feb 28 09:03:33 crc 
kubenswrapper[4996]: I0228 09:03:33.655553 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" event={"ID":"7a869b0d-fae2-4507-9d38-acb8109ef67c","Type":"ContainerStarted","Data":"088507f7a638e05806f3a0439778dec60646e88d67d3448839f42b29cd97654f"} Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.657168 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.661822 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-catalog-content\") pod \"redhat-operators-ltn9t\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") " pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.661926 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-utilities\") pod \"redhat-operators-ltn9t\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") " pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.661961 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqxz\" (UniqueName: \"kubernetes.io/projected/50d13816-0091-4325-88fd-acac1435d7ea-kube-api-access-gpqxz\") pod \"redhat-operators-ltn9t\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") " pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.669994 4996 generic.go:334] "Generic (PLEG): container finished" podID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerID="0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0" 
exitCode=0 Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.670159 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpd48" event={"ID":"f6d1fda6-9673-42cb-b6c4-b4375f870bcb","Type":"ContainerDied","Data":"0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0"} Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.670184 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpd48" event={"ID":"f6d1fda6-9673-42cb-b6c4-b4375f870bcb","Type":"ContainerStarted","Data":"604a26a9d4fba1c0665c9eff9a91b4568d9c9f9ee81df9094be2018588eabfa7"} Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.685383 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.693646 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" event={"ID":"fc6d67fa-2340-4f11-bbbe-00edb743d45f","Type":"ContainerStarted","Data":"814d2a9b489fa830ccd85a7779e4da2a2529a9c7fa3a9f2dd03c432b1689e652"} Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.693715 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.751356 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" podStartSLOduration=4.751310115 podStartE2EDuration="4.751310115s" podCreationTimestamp="2026-02-28 09:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:33.702107766 +0000 UTC m=+177.392910597" watchObservedRunningTime="2026-02-28 09:03:33.751310115 
+0000 UTC m=+177.442112926" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.763589 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-utilities\") pod \"redhat-operators-ltn9t\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") " pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.763694 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqxz\" (UniqueName: \"kubernetes.io/projected/50d13816-0091-4325-88fd-acac1435d7ea-kube-api-access-gpqxz\") pod \"redhat-operators-ltn9t\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") " pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.769844 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-utilities\") pod \"redhat-operators-ltn9t\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") " pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.772508 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-catalog-content\") pod \"redhat-operators-ltn9t\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") " pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.772974 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-catalog-content\") pod \"redhat-operators-ltn9t\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") " pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: 
I0228 09:03:33.808214 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqxz\" (UniqueName: \"kubernetes.io/projected/50d13816-0091-4325-88fd-acac1435d7ea-kube-api-access-gpqxz\") pod \"redhat-operators-ltn9t\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") " pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.813487 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" podStartSLOduration=4.813465261 podStartE2EDuration="4.813465261s" podCreationTimestamp="2026-02-28 09:03:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:03:33.795066134 +0000 UTC m=+177.485868955" watchObservedRunningTime="2026-02-28 09:03:33.813465261 +0000 UTC m=+177.504268072" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.813823 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:03:33 crc kubenswrapper[4996]: I0228 09:03:33.944666 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.026716 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vr24b"] Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.027939 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.032341 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vr24b"] Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.077378 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlppk\" (UniqueName: \"kubernetes.io/projected/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-kube-api-access-zlppk\") pod \"redhat-operators-vr24b\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.077517 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-catalog-content\") pod \"redhat-operators-vr24b\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.077553 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-utilities\") pod \"redhat-operators-vr24b\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.091554 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.169382 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.178347 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66ktq\" (UniqueName: \"kubernetes.io/projected/dcafa6f4-ba55-465f-928c-71b3687abd21-kube-api-access-66ktq\") pod \"dcafa6f4-ba55-465f-928c-71b3687abd21\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.179290 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcafa6f4-ba55-465f-928c-71b3687abd21-secret-volume\") pod \"dcafa6f4-ba55-465f-928c-71b3687abd21\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.179321 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcafa6f4-ba55-465f-928c-71b3687abd21-config-volume\") pod \"dcafa6f4-ba55-465f-928c-71b3687abd21\" (UID: \"dcafa6f4-ba55-465f-928c-71b3687abd21\") " Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.179555 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlppk\" (UniqueName: \"kubernetes.io/projected/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-kube-api-access-zlppk\") pod \"redhat-operators-vr24b\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.179672 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-catalog-content\") pod \"redhat-operators-vr24b\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 
09:03:34.180139 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-catalog-content\") pod \"redhat-operators-vr24b\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.179705 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-utilities\") pod \"redhat-operators-vr24b\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.180292 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-utilities\") pod \"redhat-operators-vr24b\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.180633 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcafa6f4-ba55-465f-928c-71b3687abd21-config-volume" (OuterVolumeSpecName: "config-volume") pod "dcafa6f4-ba55-465f-928c-71b3687abd21" (UID: "dcafa6f4-ba55-465f-928c-71b3687abd21"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.204577 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcafa6f4-ba55-465f-928c-71b3687abd21-kube-api-access-66ktq" (OuterVolumeSpecName: "kube-api-access-66ktq") pod "dcafa6f4-ba55-465f-928c-71b3687abd21" (UID: "dcafa6f4-ba55-465f-928c-71b3687abd21"). InnerVolumeSpecName "kube-api-access-66ktq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.204692 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcafa6f4-ba55-465f-928c-71b3687abd21-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dcafa6f4-ba55-465f-928c-71b3687abd21" (UID: "dcafa6f4-ba55-465f-928c-71b3687abd21"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.208289 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlppk\" (UniqueName: \"kubernetes.io/projected/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-kube-api-access-zlppk\") pod \"redhat-operators-vr24b\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.282821 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd50a361-94fd-49a2-b882-684982d99c45-kube-api-access\") pod \"fd50a361-94fd-49a2-b882-684982d99c45\" (UID: \"fd50a361-94fd-49a2-b882-684982d99c45\") " Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.283110 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd50a361-94fd-49a2-b882-684982d99c45-kubelet-dir\") pod \"fd50a361-94fd-49a2-b882-684982d99c45\" (UID: \"fd50a361-94fd-49a2-b882-684982d99c45\") " Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.283845 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66ktq\" (UniqueName: \"kubernetes.io/projected/dcafa6f4-ba55-465f-928c-71b3687abd21-kube-api-access-66ktq\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.283867 4996 reconciler_common.go:293] "Volume detached for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/dcafa6f4-ba55-465f-928c-71b3687abd21-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.283902 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcafa6f4-ba55-465f-928c-71b3687abd21-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.283972 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fd50a361-94fd-49a2-b882-684982d99c45-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fd50a361-94fd-49a2-b882-684982d99c45" (UID: "fd50a361-94fd-49a2-b882-684982d99c45"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.288633 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd50a361-94fd-49a2-b882-684982d99c45-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fd50a361-94fd-49a2-b882-684982d99c45" (UID: "fd50a361-94fd-49a2-b882-684982d99c45"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.345272 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.385546 4996 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd50a361-94fd-49a2-b882-684982d99c45-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.385590 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd50a361-94fd-49a2-b882-684982d99c45-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.520301 4996 patch_prober.go:28] interesting pod/router-default-5444994796-btfgz container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 09:03:34 crc kubenswrapper[4996]: [-]has-synced failed: reason withheld Feb 28 09:03:34 crc kubenswrapper[4996]: [+]process-running ok Feb 28 09:03:34 crc kubenswrapper[4996]: healthz check failed Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.520435 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-btfgz" podUID="209c9ded-078e-4147-8f4f-652dcc9be452" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.681681 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltn9t"] Feb 28 09:03:34 crc kubenswrapper[4996]: W0228 09:03:34.695133 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50d13816_0091_4325_88fd_acac1435d7ea.slice/crio-727b001dee271b23add17d611615359e0bdd495475bfc564b5c8d00394db2ef6 WatchSource:0}: Error finding container 
727b001dee271b23add17d611615359e0bdd495475bfc564b5c8d00394db2ef6: Status 404 returned error can't find the container with id 727b001dee271b23add17d611615359e0bdd495475bfc564b5c8d00394db2ef6 Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.704445 4996 generic.go:334] "Generic (PLEG): container finished" podID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerID="63bdd2233124b0b0f0c1f2d3b7f8024384911a617616bca7764e927575d34fc6" exitCode=0 Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.704524 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6zg5" event={"ID":"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf","Type":"ContainerDied","Data":"63bdd2233124b0b0f0c1f2d3b7f8024384911a617616bca7764e927575d34fc6"} Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.704562 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6zg5" event={"ID":"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf","Type":"ContainerStarted","Data":"8168d5a6b2f40b3c021aa5b5eca58e043bc258796d253aa166014285f7a3b6ad"} Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.708980 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" event={"ID":"dcafa6f4-ba55-465f-928c-71b3687abd21","Type":"ContainerDied","Data":"791710bec75866d7b2b3389fe82cf587868bc77debcc29eefa833ae119e79cc2"} Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.709414 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791710bec75866d7b2b3389fe82cf587868bc77debcc29eefa833ae119e79cc2" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.709493 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.717605 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.728772 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"fd50a361-94fd-49a2-b882-684982d99c45","Type":"ContainerDied","Data":"a699f33ac2086ebee1772cbdc901030357e67ed99610cebc44d2eada26e8dcf7"} Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.728882 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a699f33ac2086ebee1772cbdc901030357e67ed99610cebc44d2eada26e8dcf7" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.798149 4996 ???:1] "http: TLS handshake error from 192.168.126.11:51032: no serving certificate available for the kubelet" Feb 28 09:03:34 crc kubenswrapper[4996]: I0228 09:03:34.950524 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vr24b"] Feb 28 09:03:34 crc kubenswrapper[4996]: W0228 09:03:34.983723 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d851deb_ec13_4fcf_a9fb_2ab4bebb6d9f.slice/crio-46e140a291ba76154e1b65d46c86348481c122bcefbdd09e059954cdf9ef4c2a WatchSource:0}: Error finding container 46e140a291ba76154e1b65d46c86348481c122bcefbdd09e059954cdf9ef4c2a: Status 404 returned error can't find the container with id 46e140a291ba76154e1b65d46c86348481c122bcefbdd09e059954cdf9ef4c2a Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.186307 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.310538 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kube-api-access\") pod \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\" (UID: \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\") " Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.311176 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kubelet-dir\") pod \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\" (UID: \"434c5b33-266d-4651-8fe2-a6cd0ac816c3\") " Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.311705 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "434c5b33-266d-4651-8fe2-a6cd0ac816c3" (UID: "434c5b33-266d-4651-8fe2-a6cd0ac816c3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.321601 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "434c5b33-266d-4651-8fe2-a6cd0ac816c3" (UID: "434c5b33-266d-4651-8fe2-a6cd0ac816c3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.412703 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.412746 4996 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/434c5b33-266d-4651-8fe2-a6cd0ac816c3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.517144 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.521369 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-btfgz" Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.750442 4996 generic.go:334] "Generic (PLEG): container finished" podID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerID="d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad" exitCode=0 Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.750836 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr24b" event={"ID":"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f","Type":"ContainerDied","Data":"d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad"} Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.750965 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr24b" event={"ID":"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f","Type":"ContainerStarted","Data":"46e140a291ba76154e1b65d46c86348481c122bcefbdd09e059954cdf9ef4c2a"} Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.770408 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"434c5b33-266d-4651-8fe2-a6cd0ac816c3","Type":"ContainerDied","Data":"12b5b847fc3e978df8398a738108981d04da8beca64283d82ef8b83044f53dda"} Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.770460 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12b5b847fc3e978df8398a738108981d04da8beca64283d82ef8b83044f53dda" Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.770585 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.775467 4996 generic.go:334] "Generic (PLEG): container finished" podID="50d13816-0091-4325-88fd-acac1435d7ea" containerID="1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776" exitCode=0 Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.775788 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltn9t" event={"ID":"50d13816-0091-4325-88fd-acac1435d7ea","Type":"ContainerDied","Data":"1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776"} Feb 28 09:03:35 crc kubenswrapper[4996]: I0228 09:03:35.776903 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltn9t" event={"ID":"50d13816-0091-4325-88fd-acac1435d7ea","Type":"ContainerStarted","Data":"727b001dee271b23add17d611615359e0bdd495475bfc564b5c8d00394db2ef6"} Feb 28 09:03:38 crc kubenswrapper[4996]: I0228 09:03:38.752410 4996 ???:1] "http: TLS handshake error from 192.168.126.11:51044: no serving certificate available for the kubelet" Feb 28 09:03:38 crc kubenswrapper[4996]: I0228 09:03:38.779368 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2hzpp" Feb 28 09:03:43 crc kubenswrapper[4996]: I0228 09:03:43.057142 4996 patch_prober.go:28] interesting pod/downloads-7954f5f757-4lxnc 
container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 09:03:43 crc kubenswrapper[4996]: I0228 09:03:43.057636 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4lxnc" podUID="eb817855-81f3-4906-9fc8-6d4d02a8ca98" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 09:03:43 crc kubenswrapper[4996]: I0228 09:03:43.057187 4996 patch_prober.go:28] interesting pod/downloads-7954f5f757-4lxnc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 09:03:43 crc kubenswrapper[4996]: I0228 09:03:43.057742 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4lxnc" podUID="eb817855-81f3-4906-9fc8-6d4d02a8ca98" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 09:03:43 crc kubenswrapper[4996]: I0228 09:03:43.159141 4996 patch_prober.go:28] interesting pod/console-f9d7485db-v49dx container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Feb 28 09:03:43 crc kubenswrapper[4996]: I0228 09:03:43.159196 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-v49dx" podUID="ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Feb 28 09:03:49 crc kubenswrapper[4996]: I0228 
09:03:49.164559 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c4cd55d74-s2542"] Feb 28 09:03:49 crc kubenswrapper[4996]: I0228 09:03:49.165880 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" podUID="7a869b0d-fae2-4507-9d38-acb8109ef67c" containerName="controller-manager" containerID="cri-o://13465e38b00f1363b1d524364b315248524fa07c257399a24c2dd5ae162211db" gracePeriod=30 Feb 28 09:03:49 crc kubenswrapper[4996]: I0228 09:03:49.189928 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f"] Feb 28 09:03:49 crc kubenswrapper[4996]: I0228 09:03:49.190182 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" podUID="fc6d67fa-2340-4f11-bbbe-00edb743d45f" containerName="route-controller-manager" containerID="cri-o://814d2a9b489fa830ccd85a7779e4da2a2529a9c7fa3a9f2dd03c432b1689e652" gracePeriod=30 Feb 28 09:03:50 crc kubenswrapper[4996]: I0228 09:03:50.710307 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:03:50 crc kubenswrapper[4996]: I0228 09:03:50.914268 4996 generic.go:334] "Generic (PLEG): container finished" podID="7a869b0d-fae2-4507-9d38-acb8109ef67c" containerID="13465e38b00f1363b1d524364b315248524fa07c257399a24c2dd5ae162211db" exitCode=0 Feb 28 09:03:50 crc kubenswrapper[4996]: I0228 09:03:50.914313 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" event={"ID":"7a869b0d-fae2-4507-9d38-acb8109ef67c","Type":"ContainerDied","Data":"13465e38b00f1363b1d524364b315248524fa07c257399a24c2dd5ae162211db"} Feb 28 09:03:52 crc kubenswrapper[4996]: I0228 
09:03:52.174560 4996 patch_prober.go:28] interesting pod/controller-manager-7c4cd55d74-s2542 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Feb 28 09:03:52 crc kubenswrapper[4996]: I0228 09:03:52.174816 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" podUID="7a869b0d-fae2-4507-9d38-acb8109ef67c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Feb 28 09:03:52 crc kubenswrapper[4996]: I0228 09:03:52.181569 4996 patch_prober.go:28] interesting pod/route-controller-manager-84fc6f6768-5bb2f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 28 09:03:52 crc kubenswrapper[4996]: I0228 09:03:52.181651 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" podUID="fc6d67fa-2340-4f11-bbbe-00edb743d45f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 28 09:03:53 crc kubenswrapper[4996]: I0228 09:03:53.068130 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4lxnc" Feb 28 09:03:53 crc kubenswrapper[4996]: I0228 09:03:53.162114 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:53 crc kubenswrapper[4996]: I0228 09:03:53.167389 4996 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:03:57 crc kubenswrapper[4996]: I0228 09:03:57.664133 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-84fc6f6768-5bb2f_fc6d67fa-2340-4f11-bbbe-00edb743d45f/route-controller-manager/0.log" Feb 28 09:03:57 crc kubenswrapper[4996]: I0228 09:03:57.664359 4996 generic.go:334] "Generic (PLEG): container finished" podID="fc6d67fa-2340-4f11-bbbe-00edb743d45f" containerID="814d2a9b489fa830ccd85a7779e4da2a2529a9c7fa3a9f2dd03c432b1689e652" exitCode=-1 Feb 28 09:03:57 crc kubenswrapper[4996]: I0228 09:03:57.664424 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" event={"ID":"fc6d67fa-2340-4f11-bbbe-00edb743d45f","Type":"ContainerDied","Data":"814d2a9b489fa830ccd85a7779e4da2a2529a9c7fa3a9f2dd03c432b1689e652"} Feb 28 09:03:59 crc kubenswrapper[4996]: I0228 09:03:59.256480 4996 ???:1] "http: TLS handshake error from 192.168.126.11:43742: no serving certificate available for the kubelet" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.171067 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537824-mtpqq"] Feb 28 09:04:00 crc kubenswrapper[4996]: E0228 09:04:00.171338 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd50a361-94fd-49a2-b882-684982d99c45" containerName="pruner" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.171354 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd50a361-94fd-49a2-b882-684982d99c45" containerName="pruner" Feb 28 09:04:00 crc kubenswrapper[4996]: E0228 09:04:00.171377 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434c5b33-266d-4651-8fe2-a6cd0ac816c3" containerName="pruner" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.171384 4996 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="434c5b33-266d-4651-8fe2-a6cd0ac816c3" containerName="pruner" Feb 28 09:04:00 crc kubenswrapper[4996]: E0228 09:04:00.171394 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcafa6f4-ba55-465f-928c-71b3687abd21" containerName="collect-profiles" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.171420 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcafa6f4-ba55-465f-928c-71b3687abd21" containerName="collect-profiles" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.171543 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd50a361-94fd-49a2-b882-684982d99c45" containerName="pruner" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.171557 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="434c5b33-266d-4651-8fe2-a6cd0ac816c3" containerName="pruner" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.171565 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcafa6f4-ba55-465f-928c-71b3687abd21" containerName="collect-profiles" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.172107 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537824-mtpqq" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.174083 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.174322 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.177362 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.181840 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537824-mtpqq"] Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.254440 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqltg\" (UniqueName: \"kubernetes.io/projected/4222bc36-fe78-4dba-a558-af3b4fb70d56-kube-api-access-rqltg\") pod \"auto-csr-approver-29537824-mtpqq\" (UID: \"4222bc36-fe78-4dba-a558-af3b4fb70d56\") " pod="openshift-infra/auto-csr-approver-29537824-mtpqq" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.355788 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqltg\" (UniqueName: \"kubernetes.io/projected/4222bc36-fe78-4dba-a558-af3b4fb70d56-kube-api-access-rqltg\") pod \"auto-csr-approver-29537824-mtpqq\" (UID: \"4222bc36-fe78-4dba-a558-af3b4fb70d56\") " pod="openshift-infra/auto-csr-approver-29537824-mtpqq" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.383812 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqltg\" (UniqueName: \"kubernetes.io/projected/4222bc36-fe78-4dba-a558-af3b4fb70d56-kube-api-access-rqltg\") pod \"auto-csr-approver-29537824-mtpqq\" (UID: \"4222bc36-fe78-4dba-a558-af3b4fb70d56\") " 
pod="openshift-infra/auto-csr-approver-29537824-mtpqq" Feb 28 09:04:00 crc kubenswrapper[4996]: I0228 09:04:00.490887 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537824-mtpqq" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.174218 4996 patch_prober.go:28] interesting pod/controller-manager-7c4cd55d74-s2542 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body= Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.174274 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" podUID="7a869b0d-fae2-4507-9d38-acb8109ef67c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.711500 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" event={"ID":"fc6d67fa-2340-4f11-bbbe-00edb743d45f","Type":"ContainerDied","Data":"0d5862b73ae29654b707539ae79d0f5409dd123d40b15eb8bcc34763cd8ea605"} Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.711829 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d5862b73ae29654b707539ae79d0f5409dd123d40b15eb8bcc34763cd8ea605" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.759287 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.798329 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z"] Feb 28 09:04:02 crc kubenswrapper[4996]: E0228 09:04:02.798596 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc6d67fa-2340-4f11-bbbe-00edb743d45f" containerName="route-controller-manager" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.798614 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc6d67fa-2340-4f11-bbbe-00edb743d45f" containerName="route-controller-manager" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.798745 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc6d67fa-2340-4f11-bbbe-00edb743d45f" containerName="route-controller-manager" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.799245 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.812419 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z"] Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.891423 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls9zr\" (UniqueName: \"kubernetes.io/projected/fc6d67fa-2340-4f11-bbbe-00edb743d45f-kube-api-access-ls9zr\") pod \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.891483 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-client-ca\") pod \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.891569 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6d67fa-2340-4f11-bbbe-00edb743d45f-serving-cert\") pod \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.891606 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-config\") pod \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\" (UID: \"fc6d67fa-2340-4f11-bbbe-00edb743d45f\") " Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.892651 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-client-ca" (OuterVolumeSpecName: "client-ca") pod "fc6d67fa-2340-4f11-bbbe-00edb743d45f" 
(UID: "fc6d67fa-2340-4f11-bbbe-00edb743d45f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.892877 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-config" (OuterVolumeSpecName: "config") pod "fc6d67fa-2340-4f11-bbbe-00edb743d45f" (UID: "fc6d67fa-2340-4f11-bbbe-00edb743d45f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.897382 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc6d67fa-2340-4f11-bbbe-00edb743d45f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fc6d67fa-2340-4f11-bbbe-00edb743d45f" (UID: "fc6d67fa-2340-4f11-bbbe-00edb743d45f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.897438 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc6d67fa-2340-4f11-bbbe-00edb743d45f-kube-api-access-ls9zr" (OuterVolumeSpecName: "kube-api-access-ls9zr") pod "fc6d67fa-2340-4f11-bbbe-00edb743d45f" (UID: "fc6d67fa-2340-4f11-bbbe-00edb743d45f"). InnerVolumeSpecName "kube-api-access-ls9zr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.992952 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-config\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.993017 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks6nn\" (UniqueName: \"kubernetes.io/projected/5fd2369f-49af-428a-9a4d-a042548aba5c-kube-api-access-ks6nn\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.993110 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-client-ca\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.993162 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd2369f-49af-428a-9a4d-a042548aba5c-serving-cert\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.993210 4996 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-ls9zr\" (UniqueName: \"kubernetes.io/projected/fc6d67fa-2340-4f11-bbbe-00edb743d45f-kube-api-access-ls9zr\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.993225 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.993238 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fc6d67fa-2340-4f11-bbbe-00edb743d45f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:02 crc kubenswrapper[4996]: I0228 09:04:02.993249 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc6d67fa-2340-4f11-bbbe-00edb743d45f-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.030688 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-whfkp" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.094259 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-client-ca\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.094309 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd2369f-49af-428a-9a4d-a042548aba5c-serving-cert\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.094413 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-config\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.094444 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks6nn\" (UniqueName: \"kubernetes.io/projected/5fd2369f-49af-428a-9a4d-a042548aba5c-kube-api-access-ks6nn\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.095365 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-client-ca\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.095507 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-config\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.098259 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5fd2369f-49af-428a-9a4d-a042548aba5c-serving-cert\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.114793 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks6nn\" (UniqueName: \"kubernetes.io/projected/5fd2369f-49af-428a-9a4d-a042548aba5c-kube-api-access-ks6nn\") pod \"route-controller-manager-6b85bf7cb6-qpm5z\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.119738 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.182448 4996 patch_prober.go:28] interesting pod/route-controller-manager-84fc6f6768-5bb2f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.182521 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" podUID="fc6d67fa-2340-4f11-bbbe-00edb743d45f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.519276 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 
09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.519935 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.522584 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.523077 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.528639 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.702901 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/664c71e0-d760-4986-ac0e-1d525c75b728-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"664c71e0-d760-4986-ac0e-1d525c75b728\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.703136 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/664c71e0-d760-4986-ac0e-1d525c75b728-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"664c71e0-d760-4986-ac0e-1d525c75b728\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.716630 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.733357 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f"] Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.736397 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84fc6f6768-5bb2f"] Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.804688 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/664c71e0-d760-4986-ac0e-1d525c75b728-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"664c71e0-d760-4986-ac0e-1d525c75b728\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.804745 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/664c71e0-d760-4986-ac0e-1d525c75b728-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"664c71e0-d760-4986-ac0e-1d525c75b728\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.804798 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/664c71e0-d760-4986-ac0e-1d525c75b728-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"664c71e0-d760-4986-ac0e-1d525c75b728\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.820513 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/664c71e0-d760-4986-ac0e-1d525c75b728-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"664c71e0-d760-4986-ac0e-1d525c75b728\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:03 crc kubenswrapper[4996]: I0228 09:04:03.841573 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.007446 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.009835 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.020241 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.047670 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.074259 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.109256 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.109435 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.109513 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.111755 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.112038 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.113357 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.122750 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.123412 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.363075 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 09:04:04 crc kubenswrapper[4996]: I0228 09:04:04.395063 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 09:04:05 crc kubenswrapper[4996]: I0228 09:04:05.048239 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc6d67fa-2340-4f11-bbbe-00edb743d45f" path="/var/lib/kubelet/pods/fc6d67fa-2340-4f11-bbbe-00edb743d45f/volumes" Feb 28 09:04:07 crc kubenswrapper[4996]: E0228 09:04:07.917363 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 09:04:07 crc kubenswrapper[4996]: E0228 09:04:07.918239 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bv4vs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lpd48_openshift-marketplace(f6d1fda6-9673-42cb-b6c4-b4375f870bcb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 09:04:07 crc kubenswrapper[4996]: E0228 09:04:07.920723 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lpd48" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" Feb 28 09:04:08 crc 
kubenswrapper[4996]: E0228 09:04:08.364204 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 09:04:08 crc kubenswrapper[4996]: E0228 09:04:08.364384 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75b78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-r6zg5_openshift-marketplace(abea38f0-1d9a-4899-87bb-e9fbfbf2adaf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 09:04:08 crc kubenswrapper[4996]: E0228 09:04:08.365629 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r6zg5" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.208790 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z"] Feb 28 09:04:09 crc kubenswrapper[4996]: E0228 09:04:09.257084 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r6zg5" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" Feb 28 09:04:09 crc kubenswrapper[4996]: E0228 09:04:09.257142 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lpd48" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.308152 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.308813 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.317518 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 09:04:09 crc kubenswrapper[4996]: E0228 09:04:09.332958 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 09:04:09 crc kubenswrapper[4996]: E0228 09:04:09.333141 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzqfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8gkdr_openshift-marketplace(f16f2b07-0150-49db-af38-b617e3567070): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 09:04:09 crc kubenswrapper[4996]: E0228 09:04:09.334491 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8gkdr" podUID="f16f2b07-0150-49db-af38-b617e3567070" Feb 28 09:04:09 crc kubenswrapper[4996]: E0228 09:04:09.360328 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 09:04:09 crc kubenswrapper[4996]: E0228 09:04:09.360524 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpx56,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-w2899_openshift-marketplace(b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 09:04:09 crc kubenswrapper[4996]: E0228 09:04:09.361742 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-w2899" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" Feb 28 09:04:09 crc 
kubenswrapper[4996]: I0228 09:04:09.486715 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.487375 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kube-api-access\") pod \"installer-9-crc\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.487462 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-var-lock\") pod \"installer-9-crc\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.589065 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kube-api-access\") pod \"installer-9-crc\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.589144 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-var-lock\") pod \"installer-9-crc\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.589201 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.589299 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.589313 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-var-lock\") pod \"installer-9-crc\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.606755 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kube-api-access\") pod \"installer-9-crc\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:09 crc kubenswrapper[4996]: I0228 09:04:09.627227 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:10 crc kubenswrapper[4996]: E0228 09:04:10.908880 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-w2899" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" Feb 28 09:04:10 crc kubenswrapper[4996]: E0228 09:04:10.908983 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8gkdr" podUID="f16f2b07-0150-49db-af38-b617e3567070" Feb 28 09:04:10 crc kubenswrapper[4996]: E0228 09:04:10.983653 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 09:04:10 crc kubenswrapper[4996]: E0228 09:04:10.983821 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jx77m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-b5j29_openshift-marketplace(f7793adf-e4f7-4a80-8f37-30df8cc8cdc7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 09:04:10 crc kubenswrapper[4996]: E0228 09:04:10.985245 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-b5j29" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" Feb 28 09:04:11 crc 
kubenswrapper[4996]: E0228 09:04:11.010134 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 09:04:11 crc kubenswrapper[4996]: E0228 09:04:11.010373 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj748,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-f5szk_openshift-marketplace(33a7d489-df52-4b28-90f9-9135da43486f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 09:04:11 crc kubenswrapper[4996]: E0228 09:04:11.012180 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-f5szk" podUID="33a7d489-df52-4b28-90f9-9135da43486f" Feb 28 09:04:12 crc kubenswrapper[4996]: I0228 09:04:12.249527 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:04:12 crc kubenswrapper[4996]: I0228 09:04:12.249850 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:04:13 crc kubenswrapper[4996]: I0228 09:04:13.174598 4996 patch_prober.go:28] interesting pod/controller-manager-7c4cd55d74-s2542 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 09:04:13 crc kubenswrapper[4996]: I0228 09:04:13.175219 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" 
podUID="7a869b0d-fae2-4507-9d38-acb8109ef67c" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 09:04:14 crc kubenswrapper[4996]: E0228 09:04:14.203556 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-f5szk" podUID="33a7d489-df52-4b28-90f9-9135da43486f" Feb 28 09:04:14 crc kubenswrapper[4996]: E0228 09:04:14.204207 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-b5j29" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.252422 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.313494 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-844866d9fb-77lkh"] Feb 28 09:04:14 crc kubenswrapper[4996]: E0228 09:04:14.313945 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a869b0d-fae2-4507-9d38-acb8109ef67c" containerName="controller-manager" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.313968 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a869b0d-fae2-4507-9d38-acb8109ef67c" containerName="controller-manager" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.314217 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a869b0d-fae2-4507-9d38-acb8109ef67c" containerName="controller-manager" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.314827 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.321416 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-844866d9fb-77lkh"] Feb 28 09:04:14 crc kubenswrapper[4996]: E0228 09:04:14.354989 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 09:04:14 crc kubenswrapper[4996]: E0228 09:04:14.355394 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zlppk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:
nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-vr24b_openshift-marketplace(5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 09:04:14 crc kubenswrapper[4996]: E0228 09:04:14.358031 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-vr24b" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.456903 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-proxy-ca-bundles\") pod \"7a869b0d-fae2-4507-9d38-acb8109ef67c\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.457379 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-config\") pod \"7a869b0d-fae2-4507-9d38-acb8109ef67c\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.457438 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/7a869b0d-fae2-4507-9d38-acb8109ef67c-kube-api-access-zjgvm\") pod \"7a869b0d-fae2-4507-9d38-acb8109ef67c\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " Feb 28 09:04:14 crc 
kubenswrapper[4996]: I0228 09:04:14.457496 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a869b0d-fae2-4507-9d38-acb8109ef67c-serving-cert\") pod \"7a869b0d-fae2-4507-9d38-acb8109ef67c\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.457527 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-client-ca\") pod \"7a869b0d-fae2-4507-9d38-acb8109ef67c\" (UID: \"7a869b0d-fae2-4507-9d38-acb8109ef67c\") " Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.457724 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-client-ca\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.457797 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwqpx\" (UniqueName: \"kubernetes.io/projected/aeef5b4d-f196-4e8e-9c27-09727adbeab8-kube-api-access-gwqpx\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.457841 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-config\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 
crc kubenswrapper[4996]: I0228 09:04:14.457838 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7a869b0d-fae2-4507-9d38-acb8109ef67c" (UID: "7a869b0d-fae2-4507-9d38-acb8109ef67c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.457860 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-proxy-ca-bundles\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.457932 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef5b4d-f196-4e8e-9c27-09727adbeab8-serving-cert\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.457983 4996 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.458928 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a869b0d-fae2-4507-9d38-acb8109ef67c" (UID: "7a869b0d-fae2-4507-9d38-acb8109ef67c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.458999 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-config" (OuterVolumeSpecName: "config") pod "7a869b0d-fae2-4507-9d38-acb8109ef67c" (UID: "7a869b0d-fae2-4507-9d38-acb8109ef67c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.465132 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a869b0d-fae2-4507-9d38-acb8109ef67c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a869b0d-fae2-4507-9d38-acb8109ef67c" (UID: "7a869b0d-fae2-4507-9d38-acb8109ef67c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.465244 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a869b0d-fae2-4507-9d38-acb8109ef67c-kube-api-access-zjgvm" (OuterVolumeSpecName: "kube-api-access-zjgvm") pod "7a869b0d-fae2-4507-9d38-acb8109ef67c" (UID: "7a869b0d-fae2-4507-9d38-acb8109ef67c"). InnerVolumeSpecName "kube-api-access-zjgvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.559242 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwqpx\" (UniqueName: \"kubernetes.io/projected/aeef5b4d-f196-4e8e-9c27-09727adbeab8-kube-api-access-gwqpx\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.559392 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-config\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.559428 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-proxy-ca-bundles\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.559469 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef5b4d-f196-4e8e-9c27-09727adbeab8-serving-cert\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.559502 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-client-ca\") pod 
\"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.559552 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.559566 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjgvm\" (UniqueName: \"kubernetes.io/projected/7a869b0d-fae2-4507-9d38-acb8109ef67c-kube-api-access-zjgvm\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.559578 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a869b0d-fae2-4507-9d38-acb8109ef67c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.559590 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a869b0d-fae2-4507-9d38-acb8109ef67c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.561287 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-client-ca\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.562046 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-config\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " 
pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.563083 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-proxy-ca-bundles\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.572783 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef5b4d-f196-4e8e-9c27-09727adbeab8-serving-cert\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.579942 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwqpx\" (UniqueName: \"kubernetes.io/projected/aeef5b4d-f196-4e8e-9c27-09727adbeab8-kube-api-access-gwqpx\") pod \"controller-manager-844866d9fb-77lkh\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.634674 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.784369 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" event={"ID":"7a869b0d-fae2-4507-9d38-acb8109ef67c","Type":"ContainerDied","Data":"088507f7a638e05806f3a0439778dec60646e88d67d3448839f42b29cd97654f"} Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.784430 4996 scope.go:117] "RemoveContainer" containerID="13465e38b00f1363b1d524364b315248524fa07c257399a24c2dd5ae162211db" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.784555 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7c4cd55d74-s2542" Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.793602 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltn9t" event={"ID":"50d13816-0091-4325-88fd-acac1435d7ea","Type":"ContainerStarted","Data":"9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d"} Feb 28 09:04:14 crc kubenswrapper[4996]: E0228 09:04:14.823167 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-vr24b" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" Feb 28 09:04:14 crc kubenswrapper[4996]: W0228 09:04:14.827642 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-aaa70bcf745aceb51f32f121520e8202a1bfc5e3ddb6de4a5c12889dacba747d WatchSource:0}: Error finding container aaa70bcf745aceb51f32f121520e8202a1bfc5e3ddb6de4a5c12889dacba747d: Status 404 returned error can't find the 
container with id aaa70bcf745aceb51f32f121520e8202a1bfc5e3ddb6de4a5c12889dacba747d Feb 28 09:04:14 crc kubenswrapper[4996]: W0228 09:04:14.847906 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-6914e04a7ad23175cc37a4168d057a50309b86340f7e8353b2e54ea2cb5f7109 WatchSource:0}: Error finding container 6914e04a7ad23175cc37a4168d057a50309b86340f7e8353b2e54ea2cb5f7109: Status 404 returned error can't find the container with id 6914e04a7ad23175cc37a4168d057a50309b86340f7e8353b2e54ea2cb5f7109 Feb 28 09:04:14 crc kubenswrapper[4996]: W0228 09:04:14.853213 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode84d7679_d97e_4591_a4a3_ea6e6bfae85b.slice/crio-6682e6a79f98863f076608790a9cf32bc805a7e5cd841723f0d22b302529d406 WatchSource:0}: Error finding container 6682e6a79f98863f076608790a9cf32bc805a7e5cd841723f0d22b302529d406: Status 404 returned error can't find the container with id 6682e6a79f98863f076608790a9cf32bc805a7e5cd841723f0d22b302529d406 Feb 28 09:04:14 crc kubenswrapper[4996]: W0228 09:04:14.857965 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaeef5b4d_f196_4e8e_9c27_09727adbeab8.slice/crio-1226ff0c099019a4ea95445ec8067bde511e35b7f7f3351a6ec781dd5c0a2a05 WatchSource:0}: Error finding container 1226ff0c099019a4ea95445ec8067bde511e35b7f7f3351a6ec781dd5c0a2a05: Status 404 returned error can't find the container with id 1226ff0c099019a4ea95445ec8067bde511e35b7f7f3351a6ec781dd5c0a2a05 Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.858958 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.868145 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-844866d9fb-77lkh"] Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.908443 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7c4cd55d74-s2542"] Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.918125 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z"] Feb 28 09:04:14 crc kubenswrapper[4996]: W0228 09:04:14.919736 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-9793f1f6bad7173c17a596a9849ee537cd4839bdc5bac89e27d187acd2ffb0fe WatchSource:0}: Error finding container 9793f1f6bad7173c17a596a9849ee537cd4839bdc5bac89e27d187acd2ffb0fe: Status 404 returned error can't find the container with id 9793f1f6bad7173c17a596a9849ee537cd4839bdc5bac89e27d187acd2ffb0fe Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.926112 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7c4cd55d74-s2542"] Feb 28 09:04:14 crc kubenswrapper[4996]: W0228 09:04:14.948727 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4222bc36_fe78_4dba_a558_af3b4fb70d56.slice/crio-5c35dc21b1a4d74933850a051614f3053e9ec983f5dedcd4c643b1f1da1e169d WatchSource:0}: Error finding container 5c35dc21b1a4d74933850a051614f3053e9ec983f5dedcd4c643b1f1da1e169d: Status 404 returned error can't find the container with id 5c35dc21b1a4d74933850a051614f3053e9ec983f5dedcd4c643b1f1da1e169d Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.953722 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 09:04:14 crc kubenswrapper[4996]: I0228 09:04:14.953977 4996 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537824-mtpqq"] Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.043433 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a869b0d-fae2-4507-9d38-acb8109ef67c" path="/var/lib/kubelet/pods/7a869b0d-fae2-4507-9d38-acb8109ef67c/volumes" Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.794565 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" event={"ID":"5fd2369f-49af-428a-9a4d-a042548aba5c","Type":"ContainerStarted","Data":"761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.795062 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.795084 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" event={"ID":"5fd2369f-49af-428a-9a4d-a042548aba5c","Type":"ContainerStarted","Data":"34606b079147f346243a43af50f4e9de391cbde2982a672a7eae56f2bad20f71"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.794708 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" podUID="5fd2369f-49af-428a-9a4d-a042548aba5c" containerName="route-controller-manager" containerID="cri-o://761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240" gracePeriod=30 Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.800449 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"664c71e0-d760-4986-ac0e-1d525c75b728","Type":"ContainerStarted","Data":"af14ac70cc3b0f52f549494353a473afd01dcd9959fa10f859c9f50cddd4e696"} Feb 28 
09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.800491 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"664c71e0-d760-4986-ac0e-1d525c75b728","Type":"ContainerStarted","Data":"c06b334e4488be41546fd953fff2323219683c70137c07d4cee1361cf257442b"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.804281 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.805252 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e84d7679-d97e-4591-a4a3-ea6e6bfae85b","Type":"ContainerStarted","Data":"6449a8ba749509af653c06fc6f2f8a1051856e729fd46452d605ce2fa0a8309a"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.805301 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e84d7679-d97e-4591-a4a3-ea6e6bfae85b","Type":"ContainerStarted","Data":"6682e6a79f98863f076608790a9cf32bc805a7e5cd841723f0d22b302529d406"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.811817 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" podStartSLOduration=26.811803675 podStartE2EDuration="26.811803675s" podCreationTimestamp="2026-02-28 09:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:04:15.811414845 +0000 UTC m=+219.502217656" watchObservedRunningTime="2026-02-28 09:04:15.811803675 +0000 UTC m=+219.502606496" Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.812664 4996 generic.go:334] "Generic (PLEG): container finished" podID="50d13816-0091-4325-88fd-acac1435d7ea" 
containerID="9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d" exitCode=0 Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.812754 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltn9t" event={"ID":"50d13816-0091-4325-88fd-acac1435d7ea","Type":"ContainerDied","Data":"9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.817061 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3adea456dff7527257818803060910465e104875c4762c7a04e5d7f4df77165f"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.817092 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"aaa70bcf745aceb51f32f121520e8202a1bfc5e3ddb6de4a5c12889dacba747d"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.820750 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" event={"ID":"aeef5b4d-f196-4e8e-9c27-09727adbeab8","Type":"ContainerStarted","Data":"509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.820773 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" event={"ID":"aeef5b4d-f196-4e8e-9c27-09727adbeab8","Type":"ContainerStarted","Data":"1226ff0c099019a4ea95445ec8067bde511e35b7f7f3351a6ec781dd5c0a2a05"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.821257 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:15 
crc kubenswrapper[4996]: I0228 09:04:15.822989 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"23d924ef0ed4e23353c1aa1d396a35c4d2881ff1ca07494355d426a9d486c8a8"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.823033 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9793f1f6bad7173c17a596a9849ee537cd4839bdc5bac89e27d187acd2ffb0fe"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.824194 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.827175 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1bd1e4348cf1f491b7ebb1443d2bc586ca2cf0f7ce34e2218aae20bbfccb1525"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.827235 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"6914e04a7ad23175cc37a4168d057a50309b86340f7e8353b2e54ea2cb5f7109"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.829086 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.829205 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537824-mtpqq" 
event={"ID":"4222bc36-fe78-4dba-a558-af3b4fb70d56","Type":"ContainerStarted","Data":"5c35dc21b1a4d74933850a051614f3053e9ec983f5dedcd4c643b1f1da1e169d"} Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.830389 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=12.830370264999999 podStartE2EDuration="12.830370265s" podCreationTimestamp="2026-02-28 09:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:04:15.827739423 +0000 UTC m=+219.518542244" watchObservedRunningTime="2026-02-28 09:04:15.830370265 +0000 UTC m=+219.521173076" Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.871268 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.871250546 podStartE2EDuration="6.871250546s" podCreationTimestamp="2026-02-28 09:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:04:15.870247303 +0000 UTC m=+219.561050114" watchObservedRunningTime="2026-02-28 09:04:15.871250546 +0000 UTC m=+219.562053367" Feb 28 09:04:15 crc kubenswrapper[4996]: I0228 09:04:15.891753 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" podStartSLOduration=6.891728313 podStartE2EDuration="6.891728313s" podCreationTimestamp="2026-02-28 09:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:04:15.890077664 +0000 UTC m=+219.580880495" watchObservedRunningTime="2026-02-28 09:04:15.891728313 +0000 UTC m=+219.582531154" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.234041 4996 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.291366 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd2369f-49af-428a-9a4d-a042548aba5c-serving-cert\") pod \"5fd2369f-49af-428a-9a4d-a042548aba5c\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.291414 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-client-ca\") pod \"5fd2369f-49af-428a-9a4d-a042548aba5c\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.291469 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6nn\" (UniqueName: \"kubernetes.io/projected/5fd2369f-49af-428a-9a4d-a042548aba5c-kube-api-access-ks6nn\") pod \"5fd2369f-49af-428a-9a4d-a042548aba5c\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.291492 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-config\") pod \"5fd2369f-49af-428a-9a4d-a042548aba5c\" (UID: \"5fd2369f-49af-428a-9a4d-a042548aba5c\") " Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.292707 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-config" (OuterVolumeSpecName: "config") pod "5fd2369f-49af-428a-9a4d-a042548aba5c" (UID: "5fd2369f-49af-428a-9a4d-a042548aba5c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.293903 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-client-ca" (OuterVolumeSpecName: "client-ca") pod "5fd2369f-49af-428a-9a4d-a042548aba5c" (UID: "5fd2369f-49af-428a-9a4d-a042548aba5c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.300288 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd2369f-49af-428a-9a4d-a042548aba5c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5fd2369f-49af-428a-9a4d-a042548aba5c" (UID: "5fd2369f-49af-428a-9a4d-a042548aba5c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.308126 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd2369f-49af-428a-9a4d-a042548aba5c-kube-api-access-ks6nn" (OuterVolumeSpecName: "kube-api-access-ks6nn") pod "5fd2369f-49af-428a-9a4d-a042548aba5c" (UID: "5fd2369f-49af-428a-9a4d-a042548aba5c"). InnerVolumeSpecName "kube-api-access-ks6nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.393960 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fd2369f-49af-428a-9a4d-a042548aba5c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.394041 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.394060 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ks6nn\" (UniqueName: \"kubernetes.io/projected/5fd2369f-49af-428a-9a4d-a042548aba5c-kube-api-access-ks6nn\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.394072 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fd2369f-49af-428a-9a4d-a042548aba5c-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.838516 4996 generic.go:334] "Generic (PLEG): container finished" podID="5fd2369f-49af-428a-9a4d-a042548aba5c" containerID="761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240" exitCode=0 Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.838654 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.838644 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" event={"ID":"5fd2369f-49af-428a-9a4d-a042548aba5c","Type":"ContainerDied","Data":"761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240"} Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.839075 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z" event={"ID":"5fd2369f-49af-428a-9a4d-a042548aba5c","Type":"ContainerDied","Data":"34606b079147f346243a43af50f4e9de391cbde2982a672a7eae56f2bad20f71"} Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.839118 4996 scope.go:117] "RemoveContainer" containerID="761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.842332 4996 generic.go:334] "Generic (PLEG): container finished" podID="664c71e0-d760-4986-ac0e-1d525c75b728" containerID="af14ac70cc3b0f52f549494353a473afd01dcd9959fa10f859c9f50cddd4e696" exitCode=0 Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.842491 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"664c71e0-d760-4986-ac0e-1d525c75b728","Type":"ContainerDied","Data":"af14ac70cc3b0f52f549494353a473afd01dcd9959fa10f859c9f50cddd4e696"} Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.844772 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltn9t" event={"ID":"50d13816-0091-4325-88fd-acac1435d7ea","Type":"ContainerStarted","Data":"1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3"} Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.854972 4996 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq"] Feb 28 09:04:16 crc kubenswrapper[4996]: E0228 09:04:16.855229 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fd2369f-49af-428a-9a4d-a042548aba5c" containerName="route-controller-manager" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.855241 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fd2369f-49af-428a-9a4d-a042548aba5c" containerName="route-controller-manager" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.855349 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fd2369f-49af-428a-9a4d-a042548aba5c" containerName="route-controller-manager" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.856545 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.875694 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq"] Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.876086 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.876265 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.876521 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.876648 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.876887 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.878452 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.897399 4996 scope.go:117] "RemoveContainer" containerID="761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240" Feb 28 09:04:16 crc kubenswrapper[4996]: E0228 09:04:16.898466 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240\": container with ID starting with 761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240 not found: ID does not exist" containerID="761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.898501 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240"} err="failed to get container status \"761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240\": rpc error: code = NotFound desc = could not find container \"761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240\": container with ID starting with 761fcf6153f601989165212c42241b14b3ffeed219ceee4d2f43a8bc4837f240 not found: ID does not exist" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.908118 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ltn9t" podStartSLOduration=3.443203676 podStartE2EDuration="43.908097411s" podCreationTimestamp="2026-02-28 09:03:33 +0000 UTC" firstStartedPulling="2026-02-28 09:03:35.779048962 +0000 UTC m=+179.469851773" lastFinishedPulling="2026-02-28 09:04:16.243942697 +0000 UTC m=+219.934745508" 
observedRunningTime="2026-02-28 09:04:16.892959791 +0000 UTC m=+220.583762622" watchObservedRunningTime="2026-02-28 09:04:16.908097411 +0000 UTC m=+220.598900232" Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.939863 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z"] Feb 28 09:04:16 crc kubenswrapper[4996]: I0228 09:04:16.949230 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b85bf7cb6-qpm5z"] Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.015173 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp5d6\" (UniqueName: \"kubernetes.io/projected/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-kube-api-access-sp5d6\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.015444 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-client-ca\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.015942 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-config\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.015977 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-serving-cert\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.046698 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd2369f-49af-428a-9a4d-a042548aba5c" path="/var/lib/kubelet/pods/5fd2369f-49af-428a-9a4d-a042548aba5c/volumes" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.117044 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-config\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.117110 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-serving-cert\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.117137 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp5d6\" (UniqueName: \"kubernetes.io/projected/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-kube-api-access-sp5d6\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 
09:04:17.117186 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-client-ca\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.118089 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-client-ca\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.119106 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-config\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.124423 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-serving-cert\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.137463 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp5d6\" (UniqueName: \"kubernetes.io/projected/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-kube-api-access-sp5d6\") pod \"route-controller-manager-7cbffff575-qwpzq\" (UID: 
\"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.211721 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.644436 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq"] Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.854076 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" event={"ID":"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2","Type":"ContainerStarted","Data":"c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f"} Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.854508 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.854521 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" event={"ID":"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2","Type":"ContainerStarted","Data":"1d31213b609a53b9dcd5512de332849f53902035bca4793935a3b185bbe164dd"} Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.866298 4996 patch_prober.go:28] interesting pod/route-controller-manager-7cbffff575-qwpzq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.866355 4996 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" podUID="fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Feb 28 09:04:17 crc kubenswrapper[4996]: I0228 09:04:17.876545 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" podStartSLOduration=8.87652546 podStartE2EDuration="8.87652546s" podCreationTimestamp="2026-02-28 09:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:04:17.873210711 +0000 UTC m=+221.564013542" watchObservedRunningTime="2026-02-28 09:04:17.87652546 +0000 UTC m=+221.567328281" Feb 28 09:04:18 crc kubenswrapper[4996]: I0228 09:04:18.869940 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.735548 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.889974 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"664c71e0-d760-4986-ac0e-1d525c75b728","Type":"ContainerDied","Data":"c06b334e4488be41546fd953fff2323219683c70137c07d4cee1361cf257442b"} Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.890403 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06b334e4488be41546fd953fff2323219683c70137c07d4cee1361cf257442b" Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.890369 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.897587 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/664c71e0-d760-4986-ac0e-1d525c75b728-kubelet-dir\") pod \"664c71e0-d760-4986-ac0e-1d525c75b728\" (UID: \"664c71e0-d760-4986-ac0e-1d525c75b728\") " Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.897726 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/664c71e0-d760-4986-ac0e-1d525c75b728-kube-api-access\") pod \"664c71e0-d760-4986-ac0e-1d525c75b728\" (UID: \"664c71e0-d760-4986-ac0e-1d525c75b728\") " Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.897747 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/664c71e0-d760-4986-ac0e-1d525c75b728-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "664c71e0-d760-4986-ac0e-1d525c75b728" (UID: "664c71e0-d760-4986-ac0e-1d525c75b728"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.898185 4996 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/664c71e0-d760-4986-ac0e-1d525c75b728-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.904334 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664c71e0-d760-4986-ac0e-1d525c75b728-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "664c71e0-d760-4986-ac0e-1d525c75b728" (UID: "664c71e0-d760-4986-ac0e-1d525c75b728"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:21 crc kubenswrapper[4996]: I0228 09:04:21.999370 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/664c71e0-d760-4986-ac0e-1d525c75b728-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:23 crc kubenswrapper[4996]: I0228 09:04:23.455455 4996 csr.go:261] certificate signing request csr-wm2hp is approved, waiting to be issued Feb 28 09:04:23 crc kubenswrapper[4996]: I0228 09:04:23.474051 4996 csr.go:257] certificate signing request csr-wm2hp is issued Feb 28 09:04:23 crc kubenswrapper[4996]: I0228 09:04:23.901535 4996 generic.go:334] "Generic (PLEG): container finished" podID="4222bc36-fe78-4dba-a558-af3b4fb70d56" containerID="a733d508e45d10950be3b8fa0e70d6e61c8806293fadd4e148b16f55b31fd497" exitCode=0 Feb 28 09:04:23 crc kubenswrapper[4996]: I0228 09:04:23.901624 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537824-mtpqq" event={"ID":"4222bc36-fe78-4dba-a558-af3b4fb70d56","Type":"ContainerDied","Data":"a733d508e45d10950be3b8fa0e70d6e61c8806293fadd4e148b16f55b31fd497"} Feb 28 09:04:23 crc kubenswrapper[4996]: I0228 09:04:23.907740 4996 generic.go:334] "Generic (PLEG): container finished" podID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerID="6f40ab5be9bd6478011c76a7ed9ac8c34cd67007f3d2ef4cbeec3479e3eabeea" exitCode=0 Feb 28 09:04:23 crc kubenswrapper[4996]: I0228 09:04:23.907794 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6zg5" event={"ID":"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf","Type":"ContainerDied","Data":"6f40ab5be9bd6478011c76a7ed9ac8c34cd67007f3d2ef4cbeec3479e3eabeea"} Feb 28 09:04:23 crc kubenswrapper[4996]: I0228 09:04:23.945275 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:04:23 crc kubenswrapper[4996]: I0228 
09:04:23.945400 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:04:24 crc kubenswrapper[4996]: I0228 09:04:24.317877 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:04:24 crc kubenswrapper[4996]: I0228 09:04:24.475050 4996 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-05 09:37:24.868513248 +0000 UTC Feb 28 09:04:24 crc kubenswrapper[4996]: I0228 09:04:24.475096 4996 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6720h33m0.393419247s for next certificate rotation Feb 28 09:04:24 crc kubenswrapper[4996]: I0228 09:04:24.915078 4996 generic.go:334] "Generic (PLEG): container finished" podID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerID="b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2" exitCode=0 Feb 28 09:04:24 crc kubenswrapper[4996]: I0228 09:04:24.915157 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpd48" event={"ID":"f6d1fda6-9673-42cb-b6c4-b4375f870bcb","Type":"ContainerDied","Data":"b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2"} Feb 28 09:04:24 crc kubenswrapper[4996]: I0228 09:04:24.967786 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.273932 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537824-mtpqq" Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.443622 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqltg\" (UniqueName: \"kubernetes.io/projected/4222bc36-fe78-4dba-a558-af3b4fb70d56-kube-api-access-rqltg\") pod \"4222bc36-fe78-4dba-a558-af3b4fb70d56\" (UID: \"4222bc36-fe78-4dba-a558-af3b4fb70d56\") " Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.450191 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4222bc36-fe78-4dba-a558-af3b4fb70d56-kube-api-access-rqltg" (OuterVolumeSpecName: "kube-api-access-rqltg") pod "4222bc36-fe78-4dba-a558-af3b4fb70d56" (UID: "4222bc36-fe78-4dba-a558-af3b4fb70d56"). InnerVolumeSpecName "kube-api-access-rqltg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.475355 4996 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-12 01:30:06.710752304 +0000 UTC Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.475397 4996 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6880h25m41.235358166s for next certificate rotation Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.546891 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqltg\" (UniqueName: \"kubernetes.io/projected/4222bc36-fe78-4dba-a558-af3b4fb70d56-kube-api-access-rqltg\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.921943 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6zg5" event={"ID":"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf","Type":"ContainerStarted","Data":"bfd00ec4d2f7cbe07addde01b9428e45fc16b713511bf2259a893b5918220c51"} Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 
09:04:25.923970 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537824-mtpqq" Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.923980 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537824-mtpqq" event={"ID":"4222bc36-fe78-4dba-a558-af3b4fb70d56","Type":"ContainerDied","Data":"5c35dc21b1a4d74933850a051614f3053e9ec983f5dedcd4c643b1f1da1e169d"} Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.924035 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c35dc21b1a4d74933850a051614f3053e9ec983f5dedcd4c643b1f1da1e169d" Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.925838 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gkdr" event={"ID":"f16f2b07-0150-49db-af38-b617e3567070","Type":"ContainerStarted","Data":"7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb"} Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.930191 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpd48" event={"ID":"f6d1fda6-9673-42cb-b6c4-b4375f870bcb","Type":"ContainerStarted","Data":"224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4"} Feb 28 09:04:25 crc kubenswrapper[4996]: I0228 09:04:25.943221 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r6zg5" podStartSLOduration=3.66948116 podStartE2EDuration="53.943200968s" podCreationTimestamp="2026-02-28 09:03:32 +0000 UTC" firstStartedPulling="2026-02-28 09:03:34.706793876 +0000 UTC m=+178.397596687" lastFinishedPulling="2026-02-28 09:04:24.980513684 +0000 UTC m=+228.671316495" observedRunningTime="2026-02-28 09:04:25.942985173 +0000 UTC m=+229.633787974" watchObservedRunningTime="2026-02-28 09:04:25.943200968 +0000 UTC m=+229.634003779" Feb 28 09:04:25 crc 
kubenswrapper[4996]: I0228 09:04:25.979777 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lpd48" podStartSLOduration=2.23026966 podStartE2EDuration="53.979753956s" podCreationTimestamp="2026-02-28 09:03:32 +0000 UTC" firstStartedPulling="2026-02-28 09:03:33.672124264 +0000 UTC m=+177.362927075" lastFinishedPulling="2026-02-28 09:04:25.42160856 +0000 UTC m=+229.112411371" observedRunningTime="2026-02-28 09:04:25.978619119 +0000 UTC m=+229.669421930" watchObservedRunningTime="2026-02-28 09:04:25.979753956 +0000 UTC m=+229.670556777" Feb 28 09:04:26 crc kubenswrapper[4996]: I0228 09:04:26.938962 4996 generic.go:334] "Generic (PLEG): container finished" podID="f16f2b07-0150-49db-af38-b617e3567070" containerID="7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb" exitCode=0 Feb 28 09:04:26 crc kubenswrapper[4996]: I0228 09:04:26.939259 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gkdr" event={"ID":"f16f2b07-0150-49db-af38-b617e3567070","Type":"ContainerDied","Data":"7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb"} Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.135529 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-844866d9fb-77lkh"] Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.137142 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" podUID="aeef5b4d-f196-4e8e-9c27-09727adbeab8" containerName="controller-manager" containerID="cri-o://509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57" gracePeriod=30 Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.152501 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq"] Feb 28 09:04:29 crc 
kubenswrapper[4996]: I0228 09:04:29.152838 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" podUID="fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" containerName="route-controller-manager" containerID="cri-o://c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f" gracePeriod=30 Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.571710 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.722641 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.747520 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-config\") pod \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.747598 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-client-ca\") pod \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.747825 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-config\") pod \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.747865 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-client-ca\") pod \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.747890 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp5d6\" (UniqueName: \"kubernetes.io/projected/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-kube-api-access-sp5d6\") pod \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.747918 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwqpx\" (UniqueName: \"kubernetes.io/projected/aeef5b4d-f196-4e8e-9c27-09727adbeab8-kube-api-access-gwqpx\") pod \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.747944 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef5b4d-f196-4e8e-9c27-09727adbeab8-serving-cert\") pod \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.747960 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-proxy-ca-bundles\") pod \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\" (UID: \"aeef5b4d-f196-4e8e-9c27-09727adbeab8\") " Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.747989 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-serving-cert\") pod \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\" (UID: \"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2\") " Feb 28 
09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.748542 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-client-ca" (OuterVolumeSpecName: "client-ca") pod "fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" (UID: "fe032e7f-c1bf-4d87-bc81-c8c45f03adc2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.748662 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-config" (OuterVolumeSpecName: "config") pod "fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" (UID: "fe032e7f-c1bf-4d87-bc81-c8c45f03adc2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.749571 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-client-ca" (OuterVolumeSpecName: "client-ca") pod "aeef5b4d-f196-4e8e-9c27-09727adbeab8" (UID: "aeef5b4d-f196-4e8e-9c27-09727adbeab8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.749611 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-config" (OuterVolumeSpecName: "config") pod "aeef5b4d-f196-4e8e-9c27-09727adbeab8" (UID: "aeef5b4d-f196-4e8e-9c27-09727adbeab8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.750024 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "aeef5b4d-f196-4e8e-9c27-09727adbeab8" (UID: "aeef5b4d-f196-4e8e-9c27-09727adbeab8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.753625 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeef5b4d-f196-4e8e-9c27-09727adbeab8-kube-api-access-gwqpx" (OuterVolumeSpecName: "kube-api-access-gwqpx") pod "aeef5b4d-f196-4e8e-9c27-09727adbeab8" (UID: "aeef5b4d-f196-4e8e-9c27-09727adbeab8"). InnerVolumeSpecName "kube-api-access-gwqpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.756252 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-kube-api-access-sp5d6" (OuterVolumeSpecName: "kube-api-access-sp5d6") pod "fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" (UID: "fe032e7f-c1bf-4d87-bc81-c8c45f03adc2"). InnerVolumeSpecName "kube-api-access-sp5d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.757792 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" (UID: "fe032e7f-c1bf-4d87-bc81-c8c45f03adc2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.771643 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeef5b4d-f196-4e8e-9c27-09727adbeab8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aeef5b4d-f196-4e8e-9c27-09727adbeab8" (UID: "aeef5b4d-f196-4e8e-9c27-09727adbeab8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.849593 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeef5b4d-f196-4e8e-9c27-09727adbeab8-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.849642 4996 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.849658 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.849671 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.849709 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.849717 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.849724 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aeef5b4d-f196-4e8e-9c27-09727adbeab8-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.849735 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp5d6\" (UniqueName: \"kubernetes.io/projected/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2-kube-api-access-sp5d6\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.849745 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwqpx\" (UniqueName: \"kubernetes.io/projected/aeef5b4d-f196-4e8e-9c27-09727adbeab8-kube-api-access-gwqpx\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.963829 4996 generic.go:334] "Generic (PLEG): container finished" podID="aeef5b4d-f196-4e8e-9c27-09727adbeab8" containerID="509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57" exitCode=0 Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.963949 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.963957 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" event={"ID":"aeef5b4d-f196-4e8e-9c27-09727adbeab8","Type":"ContainerDied","Data":"509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57"} Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.964466 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-844866d9fb-77lkh" event={"ID":"aeef5b4d-f196-4e8e-9c27-09727adbeab8","Type":"ContainerDied","Data":"1226ff0c099019a4ea95445ec8067bde511e35b7f7f3351a6ec781dd5c0a2a05"} Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.964489 4996 scope.go:117] "RemoveContainer" containerID="509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.966869 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5szk" event={"ID":"33a7d489-df52-4b28-90f9-9135da43486f","Type":"ContainerStarted","Data":"9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0"} Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.972529 4996 generic.go:334] "Generic (PLEG): container finished" podID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerID="bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389" exitCode=0 Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.972583 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2899" event={"ID":"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a","Type":"ContainerDied","Data":"bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389"} Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.979300 4996 scope.go:117] "RemoveContainer" 
containerID="509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.979474 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gkdr" event={"ID":"f16f2b07-0150-49db-af38-b617e3567070","Type":"ContainerStarted","Data":"52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3"} Feb 28 09:04:29 crc kubenswrapper[4996]: E0228 09:04:29.980052 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57\": container with ID starting with 509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57 not found: ID does not exist" containerID="509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.980119 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57"} err="failed to get container status \"509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57\": rpc error: code = NotFound desc = could not find container \"509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57\": container with ID starting with 509dc74976e5043ca6d2a69dab51c2edab2640e6f0d4fd89cbe0b88125bb2c57 not found: ID does not exist" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.982333 4996 generic.go:334] "Generic (PLEG): container finished" podID="fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" containerID="c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f" exitCode=0 Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.982408 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" 
event={"ID":"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2","Type":"ContainerDied","Data":"c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f"} Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.982442 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" event={"ID":"fe032e7f-c1bf-4d87-bc81-c8c45f03adc2","Type":"ContainerDied","Data":"1d31213b609a53b9dcd5512de332849f53902035bca4793935a3b185bbe164dd"} Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.982463 4996 scope.go:117] "RemoveContainer" containerID="c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.982414 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq" Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.985281 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr24b" event={"ID":"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f","Type":"ContainerStarted","Data":"c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f"} Feb 28 09:04:29 crc kubenswrapper[4996]: I0228 09:04:29.998507 4996 scope.go:117] "RemoveContainer" containerID="c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f" Feb 28 09:04:30 crc kubenswrapper[4996]: E0228 09:04:30.004327 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f\": container with ID starting with c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f not found: ID does not exist" containerID="c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.004384 4996 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f"} err="failed to get container status \"c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f\": rpc error: code = NotFound desc = could not find container \"c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f\": container with ID starting with c10babd1a3ad78c611ee6d9aa6836e0f13277fa4d82dd7852935ac22930f9d6f not found: ID does not exist" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.016587 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-844866d9fb-77lkh"] Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.021901 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-844866d9fb-77lkh"] Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.025462 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8gkdr" podStartSLOduration=3.6879769700000002 podStartE2EDuration="1m0.025439658s" podCreationTimestamp="2026-02-28 09:03:30 +0000 UTC" firstStartedPulling="2026-02-28 09:03:32.515203138 +0000 UTC m=+176.206005969" lastFinishedPulling="2026-02-28 09:04:28.852665846 +0000 UTC m=+232.543468657" observedRunningTime="2026-02-28 09:04:30.022961859 +0000 UTC m=+233.713764680" watchObservedRunningTime="2026-02-28 09:04:30.025439658 +0000 UTC m=+233.716242469" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.077868 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq"] Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.082346 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbffff575-qwpzq"] Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.862541 4996 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-controller-manager/controller-manager-f4454fd4f-wv2t5"] Feb 28 09:04:30 crc kubenswrapper[4996]: E0228 09:04:30.863127 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664c71e0-d760-4986-ac0e-1d525c75b728" containerName="pruner" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.863143 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="664c71e0-d760-4986-ac0e-1d525c75b728" containerName="pruner" Feb 28 09:04:30 crc kubenswrapper[4996]: E0228 09:04:30.863155 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeef5b4d-f196-4e8e-9c27-09727adbeab8" containerName="controller-manager" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.863165 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeef5b4d-f196-4e8e-9c27-09727adbeab8" containerName="controller-manager" Feb 28 09:04:30 crc kubenswrapper[4996]: E0228 09:04:30.863187 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" containerName="route-controller-manager" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.863196 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" containerName="route-controller-manager" Feb 28 09:04:30 crc kubenswrapper[4996]: E0228 09:04:30.863207 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4222bc36-fe78-4dba-a558-af3b4fb70d56" containerName="oc" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.863215 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4222bc36-fe78-4dba-a558-af3b4fb70d56" containerName="oc" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.863334 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeef5b4d-f196-4e8e-9c27-09727adbeab8" containerName="controller-manager" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.863348 4996 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" containerName="route-controller-manager" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.863361 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="4222bc36-fe78-4dba-a558-af3b4fb70d56" containerName="oc" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.863372 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="664c71e0-d760-4986-ac0e-1d525c75b728" containerName="pruner" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.863867 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.865421 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.866977 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.867307 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.867360 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.867948 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.870366 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.873451 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w"] Feb 28 
09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.874382 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.878660 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.879636 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.879679 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.881351 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.881439 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.881703 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.881851 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.890918 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4454fd4f-wv2t5"] Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.899955 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w"] Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.942024 
4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.942077 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.967676 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-proxy-ca-bundles\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.967719 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8274\" (UniqueName: \"kubernetes.io/projected/a877e62d-9f94-47e1-9d65-c56cb661deed-kube-api-access-r8274\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.967746 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a877e62d-9f94-47e1-9d65-c56cb661deed-serving-cert\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.967816 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-config\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " 
pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.968133 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-client-ca\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.993205 4996 generic.go:334] "Generic (PLEG): container finished" podID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerID="d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3" exitCode=0 Feb 28 09:04:30 crc kubenswrapper[4996]: I0228 09:04:30.993294 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5j29" event={"ID":"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7","Type":"ContainerDied","Data":"d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3"} Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.000427 4996 generic.go:334] "Generic (PLEG): container finished" podID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerID="c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f" exitCode=0 Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.000483 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr24b" event={"ID":"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f","Type":"ContainerDied","Data":"c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f"} Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.010378 4996 generic.go:334] "Generic (PLEG): container finished" podID="33a7d489-df52-4b28-90f9-9135da43486f" containerID="9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0" exitCode=0 Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.010988 4996 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5szk" event={"ID":"33a7d489-df52-4b28-90f9-9135da43486f","Type":"ContainerDied","Data":"9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0"} Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.051245 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeef5b4d-f196-4e8e-9c27-09727adbeab8" path="/var/lib/kubelet/pods/aeef5b4d-f196-4e8e-9c27-09727adbeab8/volumes" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.055333 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe032e7f-c1bf-4d87-bc81-c8c45f03adc2" path="/var/lib/kubelet/pods/fe032e7f-c1bf-4d87-bc81-c8c45f03adc2/volumes" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.070621 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5m92\" (UniqueName: \"kubernetes.io/projected/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-kube-api-access-p5m92\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.070697 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-client-ca\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.070752 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-client-ca\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " 
pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.086457 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-client-ca\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.086811 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-config\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.086965 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-serving-cert\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.087059 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-proxy-ca-bundles\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.087103 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8274\" (UniqueName: 
\"kubernetes.io/projected/a877e62d-9f94-47e1-9d65-c56cb661deed-kube-api-access-r8274\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.087354 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a877e62d-9f94-47e1-9d65-c56cb661deed-serving-cert\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.087477 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-config\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.091286 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-config\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.091763 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-proxy-ca-bundles\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.095892 4996 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a877e62d-9f94-47e1-9d65-c56cb661deed-serving-cert\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.120539 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8274\" (UniqueName: \"kubernetes.io/projected/a877e62d-9f94-47e1-9d65-c56cb661deed-kube-api-access-r8274\") pod \"controller-manager-f4454fd4f-wv2t5\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.180902 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.189280 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5m92\" (UniqueName: \"kubernetes.io/projected/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-kube-api-access-p5m92\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.189351 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-client-ca\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.189392 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-config\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.189424 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-serving-cert\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.191483 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-config\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.192457 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-client-ca\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.194710 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-serving-cert\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 
09:04:31.213092 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5m92\" (UniqueName: \"kubernetes.io/projected/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-kube-api-access-p5m92\") pod \"route-controller-manager-54479b4bcc-jcs2w\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.376791 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f4454fd4f-wv2t5"] Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.498054 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.691704 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w"] Feb 28 09:04:31 crc kubenswrapper[4996]: W0228 09:04:31.698930 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bb5cbe6_3a05_4f58_aa17_f40b1cf486dc.slice/crio-ffba86f3e877f6bd5896acd9a9c5d691735fecbc3cc522d8ef90d188b738522c WatchSource:0}: Error finding container ffba86f3e877f6bd5896acd9a9c5d691735fecbc3cc522d8ef90d188b738522c: Status 404 returned error can't find the container with id ffba86f3e877f6bd5896acd9a9c5d691735fecbc3cc522d8ef90d188b738522c Feb 28 09:04:31 crc kubenswrapper[4996]: I0228 09:04:31.989126 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8gkdr" podUID="f16f2b07-0150-49db-af38-b617e3567070" containerName="registry-server" probeResult="failure" output=< Feb 28 09:04:31 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 09:04:31 crc kubenswrapper[4996]: > Feb 28 09:04:32 crc 
kubenswrapper[4996]: I0228 09:04:32.021963 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" event={"ID":"a877e62d-9f94-47e1-9d65-c56cb661deed","Type":"ContainerStarted","Data":"cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5"} Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.022047 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" event={"ID":"a877e62d-9f94-47e1-9d65-c56cb661deed","Type":"ContainerStarted","Data":"ac0f001cf34662df55e33d69c3f105e4d4460aa72b249680f5f273fb7f3bb8a5"} Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.023412 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.024555 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" event={"ID":"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc","Type":"ContainerStarted","Data":"8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f"} Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.024590 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" event={"ID":"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc","Type":"ContainerStarted","Data":"ffba86f3e877f6bd5896acd9a9c5d691735fecbc3cc522d8ef90d188b738522c"} Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.025291 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.028896 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 
09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.046880 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" podStartSLOduration=3.046857385 podStartE2EDuration="3.046857385s" podCreationTimestamp="2026-02-28 09:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:04:32.044626152 +0000 UTC m=+235.735429003" watchObservedRunningTime="2026-02-28 09:04:32.046857385 +0000 UTC m=+235.737660206" Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.099221 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" podStartSLOduration=3.099199018 podStartE2EDuration="3.099199018s" podCreationTimestamp="2026-02-28 09:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:04:32.097638531 +0000 UTC m=+235.788441362" watchObservedRunningTime="2026-02-28 09:04:32.099199018 +0000 UTC m=+235.790001839" Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.533168 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.564892 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.564957 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.621700 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:04:32 crc 
kubenswrapper[4996]: I0228 09:04:32.964982 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:04:32 crc kubenswrapper[4996]: I0228 09:04:32.965110 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:04:33 crc kubenswrapper[4996]: I0228 09:04:33.018486 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:04:33 crc kubenswrapper[4996]: I0228 09:04:33.054212 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2899" event={"ID":"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a","Type":"ContainerStarted","Data":"c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c"} Feb 28 09:04:33 crc kubenswrapper[4996]: I0228 09:04:33.079831 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2899" podStartSLOduration=2.269058462 podStartE2EDuration="1m3.079807767s" podCreationTimestamp="2026-02-28 09:03:30 +0000 UTC" firstStartedPulling="2026-02-28 09:03:31.457490528 +0000 UTC m=+175.148293339" lastFinishedPulling="2026-02-28 09:04:32.268239833 +0000 UTC m=+235.959042644" observedRunningTime="2026-02-28 09:04:33.077274787 +0000 UTC m=+236.768077598" watchObservedRunningTime="2026-02-28 09:04:33.079807767 +0000 UTC m=+236.770610588" Feb 28 09:04:33 crc kubenswrapper[4996]: I0228 09:04:33.080877 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:04:33 crc kubenswrapper[4996]: I0228 09:04:33.084185 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:04:34 crc kubenswrapper[4996]: I0228 09:04:34.067425 4996 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-b5j29" event={"ID":"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7","Type":"ContainerStarted","Data":"ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede"} Feb 28 09:04:34 crc kubenswrapper[4996]: I0228 09:04:34.095551 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b5j29" podStartSLOduration=3.732285562 podStartE2EDuration="1m4.095526159s" podCreationTimestamp="2026-02-28 09:03:30 +0000 UTC" firstStartedPulling="2026-02-28 09:03:32.541783029 +0000 UTC m=+176.232585840" lastFinishedPulling="2026-02-28 09:04:32.905023626 +0000 UTC m=+236.595826437" observedRunningTime="2026-02-28 09:04:34.093731677 +0000 UTC m=+237.784534518" watchObservedRunningTime="2026-02-28 09:04:34.095526159 +0000 UTC m=+237.786328970" Feb 28 09:04:35 crc kubenswrapper[4996]: I0228 09:04:35.336078 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6zg5"] Feb 28 09:04:35 crc kubenswrapper[4996]: I0228 09:04:35.336528 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r6zg5" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerName="registry-server" containerID="cri-o://bfd00ec4d2f7cbe07addde01b9428e45fc16b713511bf2259a893b5918220c51" gracePeriod=2 Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.088340 4996 generic.go:334] "Generic (PLEG): container finished" podID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerID="bfd00ec4d2f7cbe07addde01b9428e45fc16b713511bf2259a893b5918220c51" exitCode=0 Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.088384 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6zg5" event={"ID":"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf","Type":"ContainerDied","Data":"bfd00ec4d2f7cbe07addde01b9428e45fc16b713511bf2259a893b5918220c51"} Feb 28 09:04:36 crc 
kubenswrapper[4996]: I0228 09:04:36.090818 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5szk" event={"ID":"33a7d489-df52-4b28-90f9-9135da43486f","Type":"ContainerStarted","Data":"772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492"} Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.092798 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr24b" event={"ID":"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f","Type":"ContainerStarted","Data":"5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3"} Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.113339 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f5szk" podStartSLOduration=2.593452257 podStartE2EDuration="1m6.113320581s" podCreationTimestamp="2026-02-28 09:03:30 +0000 UTC" firstStartedPulling="2026-02-28 09:03:31.454984858 +0000 UTC m=+175.145787669" lastFinishedPulling="2026-02-28 09:04:34.974853152 +0000 UTC m=+238.665655993" observedRunningTime="2026-02-28 09:04:36.109988432 +0000 UTC m=+239.800791263" watchObservedRunningTime="2026-02-28 09:04:36.113320581 +0000 UTC m=+239.804123392" Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.134056 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vr24b" podStartSLOduration=4.3713524679999995 podStartE2EDuration="1m3.134006132s" podCreationTimestamp="2026-02-28 09:03:33 +0000 UTC" firstStartedPulling="2026-02-28 09:03:35.758560315 +0000 UTC m=+179.449363126" lastFinishedPulling="2026-02-28 09:04:34.521213979 +0000 UTC m=+238.212016790" observedRunningTime="2026-02-28 09:04:36.130334876 +0000 UTC m=+239.821137697" watchObservedRunningTime="2026-02-28 09:04:36.134006132 +0000 UTC m=+239.824808933" Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.579759 4996 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.694712 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75b78\" (UniqueName: \"kubernetes.io/projected/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-kube-api-access-75b78\") pod \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.694807 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-utilities\") pod \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.694842 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-catalog-content\") pod \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\" (UID: \"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf\") " Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.696202 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-utilities" (OuterVolumeSpecName: "utilities") pod "abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" (UID: "abea38f0-1d9a-4899-87bb-e9fbfbf2adaf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.699602 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-kube-api-access-75b78" (OuterVolumeSpecName: "kube-api-access-75b78") pod "abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" (UID: "abea38f0-1d9a-4899-87bb-e9fbfbf2adaf"). InnerVolumeSpecName "kube-api-access-75b78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.734728 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" (UID: "abea38f0-1d9a-4899-87bb-e9fbfbf2adaf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.798020 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.798097 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75b78\" (UniqueName: \"kubernetes.io/projected/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-kube-api-access-75b78\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:36 crc kubenswrapper[4996]: I0228 09:04:36.798115 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:37 crc kubenswrapper[4996]: I0228 09:04:37.100955 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6zg5" event={"ID":"abea38f0-1d9a-4899-87bb-e9fbfbf2adaf","Type":"ContainerDied","Data":"8168d5a6b2f40b3c021aa5b5eca58e043bc258796d253aa166014285f7a3b6ad"} Feb 28 09:04:37 crc kubenswrapper[4996]: I0228 09:04:37.101032 4996 scope.go:117] "RemoveContainer" containerID="bfd00ec4d2f7cbe07addde01b9428e45fc16b713511bf2259a893b5918220c51" Feb 28 09:04:37 crc kubenswrapper[4996]: I0228 09:04:37.101282 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6zg5" Feb 28 09:04:37 crc kubenswrapper[4996]: I0228 09:04:37.119514 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6zg5"] Feb 28 09:04:37 crc kubenswrapper[4996]: I0228 09:04:37.120686 4996 scope.go:117] "RemoveContainer" containerID="6f40ab5be9bd6478011c76a7ed9ac8c34cd67007f3d2ef4cbeec3479e3eabeea" Feb 28 09:04:37 crc kubenswrapper[4996]: I0228 09:04:37.124027 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6zg5"] Feb 28 09:04:37 crc kubenswrapper[4996]: I0228 09:04:37.135645 4996 scope.go:117] "RemoveContainer" containerID="63bdd2233124b0b0f0c1f2d3b7f8024384911a617616bca7764e927575d34fc6" Feb 28 09:04:39 crc kubenswrapper[4996]: I0228 09:04:39.041904 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" path="/var/lib/kubelet/pods/abea38f0-1d9a-4899-87bb-e9fbfbf2adaf/volumes" Feb 28 09:04:40 crc kubenswrapper[4996]: I0228 09:04:40.571563 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2899" Feb 28 09:04:40 crc kubenswrapper[4996]: I0228 09:04:40.571675 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2899" Feb 28 09:04:40 crc kubenswrapper[4996]: I0228 09:04:40.639863 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2899" Feb 28 09:04:40 crc kubenswrapper[4996]: I0228 09:04:40.745894 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:04:40 crc kubenswrapper[4996]: I0228 09:04:40.747230 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:04:40 crc 
kubenswrapper[4996]: I0228 09:04:40.783577 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:04:40 crc kubenswrapper[4996]: I0228 09:04:40.994720 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:04:41 crc kubenswrapper[4996]: I0228 09:04:41.052697 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:04:41 crc kubenswrapper[4996]: I0228 09:04:41.158971 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:04:41 crc kubenswrapper[4996]: I0228 09:04:41.159085 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:04:41 crc kubenswrapper[4996]: I0228 09:04:41.189050 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2899" Feb 28 09:04:41 crc kubenswrapper[4996]: I0228 09:04:41.195683 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:04:41 crc kubenswrapper[4996]: I0228 09:04:41.227807 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:04:41 crc kubenswrapper[4996]: I0228 09:04:41.736643 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gkdr"] Feb 28 09:04:42 crc kubenswrapper[4996]: I0228 09:04:42.141550 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8gkdr" podUID="f16f2b07-0150-49db-af38-b617e3567070" containerName="registry-server" 
containerID="cri-o://52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3" gracePeriod=2 Feb 28 09:04:42 crc kubenswrapper[4996]: I0228 09:04:42.183446 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:04:42 crc kubenswrapper[4996]: I0228 09:04:42.249054 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:04:42 crc kubenswrapper[4996]: I0228 09:04:42.249128 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:04:43 crc kubenswrapper[4996]: I0228 09:04:43.138648 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5j29"] Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.086851 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.155869 4996 generic.go:334] "Generic (PLEG): container finished" podID="f16f2b07-0150-49db-af38-b617e3567070" containerID="52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3" exitCode=0 Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.155935 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gkdr" event={"ID":"f16f2b07-0150-49db-af38-b617e3567070","Type":"ContainerDied","Data":"52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3"} Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.156032 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gkdr" event={"ID":"f16f2b07-0150-49db-af38-b617e3567070","Type":"ContainerDied","Data":"dd5a067ed1162a191c8cfe818e6e65de804d205a659eb8eee2dde8e1e802532a"} Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.156066 4996 scope.go:117] "RemoveContainer" containerID="52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.156135 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b5j29" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerName="registry-server" containerID="cri-o://ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede" gracePeriod=2 Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.156953 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gkdr" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.179616 4996 scope.go:117] "RemoveContainer" containerID="7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.194043 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-utilities\") pod \"f16f2b07-0150-49db-af38-b617e3567070\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.194102 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzqfh\" (UniqueName: \"kubernetes.io/projected/f16f2b07-0150-49db-af38-b617e3567070-kube-api-access-fzqfh\") pod \"f16f2b07-0150-49db-af38-b617e3567070\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.194178 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-catalog-content\") pod \"f16f2b07-0150-49db-af38-b617e3567070\" (UID: \"f16f2b07-0150-49db-af38-b617e3567070\") " Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.195238 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-utilities" (OuterVolumeSpecName: "utilities") pod "f16f2b07-0150-49db-af38-b617e3567070" (UID: "f16f2b07-0150-49db-af38-b617e3567070"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.195999 4996 scope.go:117] "RemoveContainer" containerID="3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.199960 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f16f2b07-0150-49db-af38-b617e3567070-kube-api-access-fzqfh" (OuterVolumeSpecName: "kube-api-access-fzqfh") pod "f16f2b07-0150-49db-af38-b617e3567070" (UID: "f16f2b07-0150-49db-af38-b617e3567070"). InnerVolumeSpecName "kube-api-access-fzqfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.246084 4996 scope.go:117] "RemoveContainer" containerID="52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3" Feb 28 09:04:44 crc kubenswrapper[4996]: E0228 09:04:44.246491 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3\": container with ID starting with 52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3 not found: ID does not exist" containerID="52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.246535 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3"} err="failed to get container status \"52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3\": rpc error: code = NotFound desc = could not find container \"52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3\": container with ID starting with 52e0d80cce7b500730460bc6a89b4f231e939082a5474f05d0e21748963f21b3 not found: ID does not exist" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.246559 
4996 scope.go:117] "RemoveContainer" containerID="7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb" Feb 28 09:04:44 crc kubenswrapper[4996]: E0228 09:04:44.247663 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb\": container with ID starting with 7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb not found: ID does not exist" containerID="7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.247852 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb"} err="failed to get container status \"7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb\": rpc error: code = NotFound desc = could not find container \"7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb\": container with ID starting with 7a8b682596604007551d0a04e1d81d122ed1c4685bde60a12fef0276c2dfd3fb not found: ID does not exist" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.247995 4996 scope.go:117] "RemoveContainer" containerID="3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b" Feb 28 09:04:44 crc kubenswrapper[4996]: E0228 09:04:44.248504 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b\": container with ID starting with 3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b not found: ID does not exist" containerID="3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.248537 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b"} err="failed to get container status \"3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b\": rpc error: code = NotFound desc = could not find container \"3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b\": container with ID starting with 3d737f2f1afb7d7445dbd47086313d0821520275f3104417c50e13748606b86b not found: ID does not exist" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.252682 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f16f2b07-0150-49db-af38-b617e3567070" (UID: "f16f2b07-0150-49db-af38-b617e3567070"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.295799 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.295855 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f2b07-0150-49db-af38-b617e3567070-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.295878 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzqfh\" (UniqueName: \"kubernetes.io/projected/f16f2b07-0150-49db-af38-b617e3567070-kube-api-access-fzqfh\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.347201 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.347258 4996 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.393628 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.486490 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gkdr"] Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.493250 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8gkdr"] Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.644940 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.801748 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-catalog-content\") pod \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.801788 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx77m\" (UniqueName: \"kubernetes.io/projected/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-kube-api-access-jx77m\") pod \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.801813 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-utilities\") pod \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\" (UID: \"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7\") " Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.802575 4996 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-utilities" (OuterVolumeSpecName: "utilities") pod "f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" (UID: "f7793adf-e4f7-4a80-8f37-30df8cc8cdc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.807173 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-kube-api-access-jx77m" (OuterVolumeSpecName: "kube-api-access-jx77m") pod "f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" (UID: "f7793adf-e4f7-4a80-8f37-30df8cc8cdc7"). InnerVolumeSpecName "kube-api-access-jx77m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.854436 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" (UID: "f7793adf-e4f7-4a80-8f37-30df8cc8cdc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.903151 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.903187 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx77m\" (UniqueName: \"kubernetes.io/projected/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-kube-api-access-jx77m\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:44 crc kubenswrapper[4996]: I0228 09:04:44.903208 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.044149 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f16f2b07-0150-49db-af38-b617e3567070" path="/var/lib/kubelet/pods/f16f2b07-0150-49db-af38-b617e3567070/volumes" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.165232 4996 generic.go:334] "Generic (PLEG): container finished" podID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerID="ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede" exitCode=0 Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.165457 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5j29" event={"ID":"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7","Type":"ContainerDied","Data":"ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede"} Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.165538 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5j29" 
event={"ID":"f7793adf-e4f7-4a80-8f37-30df8cc8cdc7","Type":"ContainerDied","Data":"e8c52edaaefd7dba6ec9b9f1c519a30791e7a7f07a0924d4b1971cfd7efe4f0f"} Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.165572 4996 scope.go:117] "RemoveContainer" containerID="ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.165642 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5j29" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.190476 4996 scope.go:117] "RemoveContainer" containerID="d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.195219 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5j29"] Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.202464 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b5j29"] Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.216213 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.219589 4996 scope.go:117] "RemoveContainer" containerID="00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.257823 4996 scope.go:117] "RemoveContainer" containerID="ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede" Feb 28 09:04:45 crc kubenswrapper[4996]: E0228 09:04:45.258595 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede\": container with ID starting with ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede not found: ID does not exist" 
containerID="ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.258675 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede"} err="failed to get container status \"ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede\": rpc error: code = NotFound desc = could not find container \"ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede\": container with ID starting with ed9f960016b4917662f09ae8d54def4e9f2fc3f5ef91abfad1255e94e5f04ede not found: ID does not exist" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.258726 4996 scope.go:117] "RemoveContainer" containerID="d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3" Feb 28 09:04:45 crc kubenswrapper[4996]: E0228 09:04:45.259520 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3\": container with ID starting with d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3 not found: ID does not exist" containerID="d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.259605 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3"} err="failed to get container status \"d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3\": rpc error: code = NotFound desc = could not find container \"d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3\": container with ID starting with d1de6ca42f90fd55d8bc55f73139f05e387758e81ec8d7b1cc1e4606731cc7f3 not found: ID does not exist" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.259652 4996 scope.go:117] 
"RemoveContainer" containerID="00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc" Feb 28 09:04:45 crc kubenswrapper[4996]: E0228 09:04:45.260146 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc\": container with ID starting with 00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc not found: ID does not exist" containerID="00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc" Feb 28 09:04:45 crc kubenswrapper[4996]: I0228 09:04:45.260184 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc"} err="failed to get container status \"00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc\": rpc error: code = NotFound desc = could not find container \"00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc\": container with ID starting with 00bf176e7b6c57df2b41079bc212c9eccc467b1911ee1f09f9c6b467070bd2bc not found: ID does not exist" Feb 28 09:04:47 crc kubenswrapper[4996]: I0228 09:04:47.040279 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" path="/var/lib/kubelet/pods/f7793adf-e4f7-4a80-8f37-30df8cc8cdc7/volumes" Feb 28 09:04:47 crc kubenswrapper[4996]: I0228 09:04:47.548630 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vr24b"] Feb 28 09:04:47 crc kubenswrapper[4996]: I0228 09:04:47.549161 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vr24b" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerName="registry-server" containerID="cri-o://5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3" gracePeriod=2 Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 
09:04:48.022780 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.045738 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-utilities\") pod \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.045800 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlppk\" (UniqueName: \"kubernetes.io/projected/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-kube-api-access-zlppk\") pod \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.045830 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-catalog-content\") pod \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\" (UID: \"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f\") " Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.046692 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-utilities" (OuterVolumeSpecName: "utilities") pod "5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" (UID: "5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.047046 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.054738 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-kube-api-access-zlppk" (OuterVolumeSpecName: "kube-api-access-zlppk") pod "5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" (UID: "5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f"). InnerVolumeSpecName "kube-api-access-zlppk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.147506 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlppk\" (UniqueName: \"kubernetes.io/projected/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-kube-api-access-zlppk\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.175673 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" (UID: "5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.188086 4996 generic.go:334] "Generic (PLEG): container finished" podID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerID="5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3" exitCode=0 Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.188135 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr24b" event={"ID":"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f","Type":"ContainerDied","Data":"5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3"} Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.188167 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vr24b" event={"ID":"5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f","Type":"ContainerDied","Data":"46e140a291ba76154e1b65d46c86348481c122bcefbdd09e059954cdf9ef4c2a"} Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.188186 4996 scope.go:117] "RemoveContainer" containerID="5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.188321 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vr24b" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.211421 4996 scope.go:117] "RemoveContainer" containerID="c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.216067 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vr24b"] Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.223636 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vr24b"] Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.241581 4996 scope.go:117] "RemoveContainer" containerID="d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.252440 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.256686 4996 scope.go:117] "RemoveContainer" containerID="5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3" Feb 28 09:04:48 crc kubenswrapper[4996]: E0228 09:04:48.257220 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3\": container with ID starting with 5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3 not found: ID does not exist" containerID="5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.257263 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3"} err="failed to get container status 
\"5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3\": rpc error: code = NotFound desc = could not find container \"5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3\": container with ID starting with 5df25356d392e198fb9936ce9a132638bdc65ba4ed770d94d3bc171d61cea1c3 not found: ID does not exist" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.257290 4996 scope.go:117] "RemoveContainer" containerID="c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f" Feb 28 09:04:48 crc kubenswrapper[4996]: E0228 09:04:48.257531 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f\": container with ID starting with c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f not found: ID does not exist" containerID="c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.257559 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f"} err="failed to get container status \"c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f\": rpc error: code = NotFound desc = could not find container \"c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f\": container with ID starting with c38ed9ef34542e32aed52a62cc0c1c6c28c2d307328c44681986e6f67476523f not found: ID does not exist" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.257577 4996 scope.go:117] "RemoveContainer" containerID="d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad" Feb 28 09:04:48 crc kubenswrapper[4996]: E0228 09:04:48.258293 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad\": container with ID starting with d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad not found: ID does not exist" containerID="d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad" Feb 28 09:04:48 crc kubenswrapper[4996]: I0228 09:04:48.258323 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad"} err="failed to get container status \"d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad\": rpc error: code = NotFound desc = could not find container \"d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad\": container with ID starting with d66789584b538b7cf57d43aeb32d83ddb2c25f7bbd24d54173c25bbef8da1aad not found: ID does not exist" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.040815 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" path="/var/lib/kubelet/pods/5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f/volumes" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.133437 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4454fd4f-wv2t5"] Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.133629 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" podUID="a877e62d-9f94-47e1-9d65-c56cb661deed" containerName="controller-manager" containerID="cri-o://cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5" gracePeriod=30 Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.233145 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w"] Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.233334 4996 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" podUID="5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" containerName="route-controller-manager" containerID="cri-o://8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f" gracePeriod=30 Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.660813 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.666869 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.669827 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5m92\" (UniqueName: \"kubernetes.io/projected/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-kube-api-access-p5m92\") pod \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.669866 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-client-ca\") pod \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.669912 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a877e62d-9f94-47e1-9d65-c56cb661deed-serving-cert\") pod \"a877e62d-9f94-47e1-9d65-c56cb661deed\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.669933 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-config\") pod \"a877e62d-9f94-47e1-9d65-c56cb661deed\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.669960 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-serving-cert\") pod \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.669983 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-client-ca\") pod \"a877e62d-9f94-47e1-9d65-c56cb661deed\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.670015 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-config\") pod \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\" (UID: \"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc\") " Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.670036 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8274\" (UniqueName: \"kubernetes.io/projected/a877e62d-9f94-47e1-9d65-c56cb661deed-kube-api-access-r8274\") pod \"a877e62d-9f94-47e1-9d65-c56cb661deed\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.670055 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-proxy-ca-bundles\") pod \"a877e62d-9f94-47e1-9d65-c56cb661deed\" (UID: \"a877e62d-9f94-47e1-9d65-c56cb661deed\") " Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.670830 4996 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-client-ca" (OuterVolumeSpecName: "client-ca") pod "5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" (UID: "5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.670882 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-config" (OuterVolumeSpecName: "config") pod "5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" (UID: "5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.670912 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-config" (OuterVolumeSpecName: "config") pod "a877e62d-9f94-47e1-9d65-c56cb661deed" (UID: "a877e62d-9f94-47e1-9d65-c56cb661deed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.670929 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-client-ca" (OuterVolumeSpecName: "client-ca") pod "a877e62d-9f94-47e1-9d65-c56cb661deed" (UID: "a877e62d-9f94-47e1-9d65-c56cb661deed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.671292 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a877e62d-9f94-47e1-9d65-c56cb661deed" (UID: "a877e62d-9f94-47e1-9d65-c56cb661deed"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.675685 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" (UID: "5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.675696 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a877e62d-9f94-47e1-9d65-c56cb661deed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a877e62d-9f94-47e1-9d65-c56cb661deed" (UID: "a877e62d-9f94-47e1-9d65-c56cb661deed"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.676432 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-kube-api-access-p5m92" (OuterVolumeSpecName: "kube-api-access-p5m92") pod "5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" (UID: "5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc"). InnerVolumeSpecName "kube-api-access-p5m92". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.676782 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a877e62d-9f94-47e1-9d65-c56cb661deed-kube-api-access-r8274" (OuterVolumeSpecName: "kube-api-access-r8274") pod "a877e62d-9f94-47e1-9d65-c56cb661deed" (UID: "a877e62d-9f94-47e1-9d65-c56cb661deed"). InnerVolumeSpecName "kube-api-access-r8274". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.771236 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.771272 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.771282 4996 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.771293 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.771303 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8274\" (UniqueName: \"kubernetes.io/projected/a877e62d-9f94-47e1-9d65-c56cb661deed-kube-api-access-r8274\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.771314 4996 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a877e62d-9f94-47e1-9d65-c56cb661deed-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.771323 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5m92\" (UniqueName: \"kubernetes.io/projected/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-kube-api-access-p5m92\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.771331 4996 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:49 crc kubenswrapper[4996]: I0228 09:04:49.771339 4996 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a877e62d-9f94-47e1-9d65-c56cb661deed-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.200087 4996 generic.go:334] "Generic (PLEG): container finished" podID="5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" containerID="8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f" exitCode=0 Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.200134 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" event={"ID":"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc","Type":"ContainerDied","Data":"8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f"} Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.200183 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.200211 4996 scope.go:117] "RemoveContainer" containerID="8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.200198 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w" event={"ID":"5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc","Type":"ContainerDied","Data":"ffba86f3e877f6bd5896acd9a9c5d691735fecbc3cc522d8ef90d188b738522c"} Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.202383 4996 generic.go:334] "Generic (PLEG): container finished" podID="a877e62d-9f94-47e1-9d65-c56cb661deed" containerID="cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5" exitCode=0 Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.202415 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" event={"ID":"a877e62d-9f94-47e1-9d65-c56cb661deed","Type":"ContainerDied","Data":"cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5"} Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.202459 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" event={"ID":"a877e62d-9f94-47e1-9d65-c56cb661deed","Type":"ContainerDied","Data":"ac0f001cf34662df55e33d69c3f105e4d4460aa72b249680f5f273fb7f3bb8a5"} Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.202432 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f4454fd4f-wv2t5" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.222206 4996 scope.go:117] "RemoveContainer" containerID="8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.222730 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f\": container with ID starting with 8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f not found: ID does not exist" containerID="8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.222781 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f"} err="failed to get container status \"8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f\": rpc error: code = NotFound desc = could not find container \"8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f\": container with ID starting with 8474798eb6e43272bc6ba67810a6d99b80146432ec10a3b36699176bfe35e82f not found: ID does not exist" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.222811 4996 scope.go:117] "RemoveContainer" containerID="cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.235529 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f4454fd4f-wv2t5"] Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.239166 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f4454fd4f-wv2t5"] Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.241619 4996 scope.go:117] "RemoveContainer" 
containerID="cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.242377 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5\": container with ID starting with cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5 not found: ID does not exist" containerID="cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.242429 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5"} err="failed to get container status \"cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5\": rpc error: code = NotFound desc = could not find container \"cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5\": container with ID starting with cf65777e50c48cc12d059b58babc58adc7797a1e4e8fcdeca8683a6b734879d5 not found: ID does not exist" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.247976 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w"] Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.256074 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54479b4bcc-jcs2w"] Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.886402 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l"] Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887025 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f16f2b07-0150-49db-af38-b617e3567070" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 
09:04:50.887091 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16f2b07-0150-49db-af38-b617e3567070" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887113 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerName="extract-content" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887125 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerName="extract-content" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887142 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerName="extract-utilities" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887154 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerName="extract-utilities" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887175 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887185 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887196 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f16f2b07-0150-49db-af38-b617e3567070" containerName="extract-content" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887206 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16f2b07-0150-49db-af38-b617e3567070" containerName="extract-content" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887218 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" containerName="route-controller-manager" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 
09:04:50.887226 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" containerName="route-controller-manager" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887235 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f16f2b07-0150-49db-af38-b617e3567070" containerName="extract-utilities" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887243 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16f2b07-0150-49db-af38-b617e3567070" containerName="extract-utilities" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887255 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887263 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887277 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerName="extract-utilities" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887285 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerName="extract-utilities" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887295 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerName="extract-content" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887303 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerName="extract-content" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887315 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerName="extract-utilities" Feb 28 09:04:50 crc kubenswrapper[4996]: 
I0228 09:04:50.887323 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerName="extract-utilities" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887332 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerName="extract-content" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887340 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerName="extract-content" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887353 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887361 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: E0228 09:04:50.887374 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a877e62d-9f94-47e1-9d65-c56cb661deed" containerName="controller-manager" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887382 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a877e62d-9f94-47e1-9d65-c56cb661deed" containerName="controller-manager" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887497 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a877e62d-9f94-47e1-9d65-c56cb661deed" containerName="controller-manager" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887511 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d851deb-ec13-4fcf-a9fb-2ab4bebb6d9f" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887525 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="abea38f0-1d9a-4899-87bb-e9fbfbf2adaf" containerName="registry-server" Feb 28 09:04:50 crc 
kubenswrapper[4996]: I0228 09:04:50.887541 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f16f2b07-0150-49db-af38-b617e3567070" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887552 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" containerName="route-controller-manager" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.887563 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7793adf-e4f7-4a80-8f37-30df8cc8cdc7" containerName="registry-server" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.888048 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.889468 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-585dcdcbd4-dds9m"] Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.889910 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.890522 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.890599 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.891098 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.896531 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l"] Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.899059 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.899319 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.899526 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.899634 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.899719 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.899719 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"serving-cert" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.901267 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.902768 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.903406 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585dcdcbd4-dds9m"] Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.903709 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.909808 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.986189 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxzkz\" (UniqueName: \"kubernetes.io/projected/8c9cf642-4804-4c65-9067-0a5311aa057a-kube-api-access-hxzkz\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.986254 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7906df00-fe22-409d-8a7d-82c2f8aef6bb-config\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.986365 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7906df00-fe22-409d-8a7d-82c2f8aef6bb-serving-cert\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.986400 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c9cf642-4804-4c65-9067-0a5311aa057a-serving-cert\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.986427 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzckr\" (UniqueName: \"kubernetes.io/projected/7906df00-fe22-409d-8a7d-82c2f8aef6bb-kube-api-access-qzckr\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.986506 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7906df00-fe22-409d-8a7d-82c2f8aef6bb-client-ca\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.986541 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c9cf642-4804-4c65-9067-0a5311aa057a-config\") pod 
\"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.986632 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c9cf642-4804-4c65-9067-0a5311aa057a-proxy-ca-bundles\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:50 crc kubenswrapper[4996]: I0228 09:04:50.986663 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c9cf642-4804-4c65-9067-0a5311aa057a-client-ca\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.041550 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc" path="/var/lib/kubelet/pods/5bb5cbe6-3a05-4f58-aa17-f40b1cf486dc/volumes" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.042305 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a877e62d-9f94-47e1-9d65-c56cb661deed" path="/var/lib/kubelet/pods/a877e62d-9f94-47e1-9d65-c56cb661deed/volumes" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.088477 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7906df00-fe22-409d-8a7d-82c2f8aef6bb-client-ca\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:51 crc 
kubenswrapper[4996]: I0228 09:04:51.088545 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c9cf642-4804-4c65-9067-0a5311aa057a-config\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.088627 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8c9cf642-4804-4c65-9067-0a5311aa057a-proxy-ca-bundles\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.088676 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c9cf642-4804-4c65-9067-0a5311aa057a-client-ca\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.089338 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxzkz\" (UniqueName: \"kubernetes.io/projected/8c9cf642-4804-4c65-9067-0a5311aa057a-kube-api-access-hxzkz\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.089757 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7906df00-fe22-409d-8a7d-82c2f8aef6bb-config\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") 
" pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.089918 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7906df00-fe22-409d-8a7d-82c2f8aef6bb-serving-cert\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.089971 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c9cf642-4804-4c65-9067-0a5311aa057a-serving-cert\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.090058 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzckr\" (UniqueName: \"kubernetes.io/projected/7906df00-fe22-409d-8a7d-82c2f8aef6bb-kube-api-access-qzckr\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.091062 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c9cf642-4804-4c65-9067-0a5311aa057a-config\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.092046 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/8c9cf642-4804-4c65-9067-0a5311aa057a-proxy-ca-bundles\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.092948 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8c9cf642-4804-4c65-9067-0a5311aa057a-client-ca\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.092960 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7906df00-fe22-409d-8a7d-82c2f8aef6bb-client-ca\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.093033 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7906df00-fe22-409d-8a7d-82c2f8aef6bb-config\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.100873 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7906df00-fe22-409d-8a7d-82c2f8aef6bb-serving-cert\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.115447 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c9cf642-4804-4c65-9067-0a5311aa057a-serving-cert\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.117474 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzckr\" (UniqueName: \"kubernetes.io/projected/7906df00-fe22-409d-8a7d-82c2f8aef6bb-kube-api-access-qzckr\") pod \"route-controller-manager-56886c5f9b-86c4l\" (UID: \"7906df00-fe22-409d-8a7d-82c2f8aef6bb\") " pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.118256 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxzkz\" (UniqueName: \"kubernetes.io/projected/8c9cf642-4804-4c65-9067-0a5311aa057a-kube-api-access-hxzkz\") pod \"controller-manager-585dcdcbd4-dds9m\" (UID: \"8c9cf642-4804-4c65-9067-0a5311aa057a\") " pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.209569 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.222527 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.434517 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-585dcdcbd4-dds9m"] Feb 28 09:04:51 crc kubenswrapper[4996]: I0228 09:04:51.485462 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l"] Feb 28 09:04:51 crc kubenswrapper[4996]: W0228 09:04:51.507417 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7906df00_fe22_409d_8a7d_82c2f8aef6bb.slice/crio-942fcc372f3862881672504d4d598bd57ce46e72c448d6eb6fa23416c5358b1b WatchSource:0}: Error finding container 942fcc372f3862881672504d4d598bd57ce46e72c448d6eb6fa23416c5358b1b: Status 404 returned error can't find the container with id 942fcc372f3862881672504d4d598bd57ce46e72c448d6eb6fa23416c5358b1b Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.215388 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" event={"ID":"7906df00-fe22-409d-8a7d-82c2f8aef6bb","Type":"ContainerStarted","Data":"6337ebac2da5dbe84ee5298e0bb030333d6f5094de5647225b4cc1238104a202"} Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.215440 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" event={"ID":"7906df00-fe22-409d-8a7d-82c2f8aef6bb","Type":"ContainerStarted","Data":"942fcc372f3862881672504d4d598bd57ce46e72c448d6eb6fa23416c5358b1b"} Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.216621 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 
09:04:52.217886 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" event={"ID":"8c9cf642-4804-4c65-9067-0a5311aa057a","Type":"ContainerStarted","Data":"88a2cfb56a762c6f3202ff65661f3766e285a1cedec35dcc54bc479cb2018529"} Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.217915 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" event={"ID":"8c9cf642-4804-4c65-9067-0a5311aa057a","Type":"ContainerStarted","Data":"db4874a9153519ac889c771599ddc2be7b4c05020fcc79aa1c6ce11e3826c24f"} Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.218612 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.223126 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.224155 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.232942 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56886c5f9b-86c4l" podStartSLOduration=3.232925832 podStartE2EDuration="3.232925832s" podCreationTimestamp="2026-02-28 09:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:04:52.230509165 +0000 UTC m=+255.921311976" watchObservedRunningTime="2026-02-28 09:04:52.232925832 +0000 UTC m=+255.923728643" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.250464 4996 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-585dcdcbd4-dds9m" podStartSLOduration=3.250448568 podStartE2EDuration="3.250448568s" podCreationTimestamp="2026-02-28 09:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:04:52.249989337 +0000 UTC m=+255.940792148" watchObservedRunningTime="2026-02-28 09:04:52.250448568 +0000 UTC m=+255.941251379" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.313999 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g92n"] Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.916716 4996 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.917065 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f" gracePeriod=15 Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.917203 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55" gracePeriod=15 Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.917246 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c" gracePeriod=15 Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 
09:04:52.917277 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005" gracePeriod=15 Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.917306 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b" gracePeriod=15 Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.921864 4996 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922085 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922108 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922123 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922132 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922138 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922144 4996 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922153 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922159 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922167 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922173 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922184 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922190 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922197 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922203 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922211 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 
09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922217 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922225 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922231 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 09:04:52 crc kubenswrapper[4996]: E0228 09:04:52.922242 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922247 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922327 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922337 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922344 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922351 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922361 4996 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922370 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922375 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922548 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.922711 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.923471 4996 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.923858 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.932796 4996 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 28 09:04:52 crc kubenswrapper[4996]: I0228 09:04:52.953352 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.114074 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.114137 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.114438 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.114525 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.114542 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.114557 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.114594 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.114610 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.215432 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.215779 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.215844 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.215551 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.215932 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.215967 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216074 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216122 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216142 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216161 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216246 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216276 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216303 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216330 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216357 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.216383 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.223670 4996 generic.go:334] "Generic (PLEG): container finished" podID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" containerID="6449a8ba749509af653c06fc6f2f8a1051856e729fd46452d605ce2fa0a8309a" exitCode=0 Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.223758 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e84d7679-d97e-4591-a4a3-ea6e6bfae85b","Type":"ContainerDied","Data":"6449a8ba749509af653c06fc6f2f8a1051856e729fd46452d605ce2fa0a8309a"} Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.224732 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.225038 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.225309 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.226343 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.226923 4996 generic.go:334] "Generic (PLEG): 
container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55" exitCode=0 Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.226941 4996 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c" exitCode=0 Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.226951 4996 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005" exitCode=0 Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.226959 4996 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b" exitCode=2 Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.227073 4996 scope.go:117] "RemoveContainer" containerID="00a8fc338699b18516acb628ed4525a0f070900733f2f03054d194d9c49a275c" Feb 28 09:04:53 crc kubenswrapper[4996]: I0228 09:04:53.248784 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:04:53 crc kubenswrapper[4996]: W0228 09:04:53.269612 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b437a67339defd32182e0f3b340dcaf1b5dd275d5f7f15c867e6d806a92aa75e WatchSource:0}: Error finding container b437a67339defd32182e0f3b340dcaf1b5dd275d5f7f15c867e6d806a92aa75e: Status 404 returned error can't find the container with id b437a67339defd32182e0f3b340dcaf1b5dd275d5f7f15c867e6d806a92aa75e Feb 28 09:04:53 crc kubenswrapper[4996]: E0228 09:04:53.272081 4996 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18985db6d8b7d251 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:04:53.271646801 +0000 UTC m=+256.962449612,LastTimestamp:2026-02-28 09:04:53.271646801 +0000 UTC m=+256.962449612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.084533 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.085356 4996 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.086423 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.086905 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.239991 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357"} Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.240174 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b437a67339defd32182e0f3b340dcaf1b5dd275d5f7f15c867e6d806a92aa75e"} Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 
09:04:54.241173 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.242151 4996 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.242775 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.247669 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.680256 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.681227 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.681676 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.682427 4996 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.841717 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kube-api-access\") pod \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.841855 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-var-lock\") pod \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " Feb 28 09:04:54 
crc kubenswrapper[4996]: I0228 09:04:54.841889 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kubelet-dir\") pod \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\" (UID: \"e84d7679-d97e-4591-a4a3-ea6e6bfae85b\") " Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.842130 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e84d7679-d97e-4591-a4a3-ea6e6bfae85b" (UID: "e84d7679-d97e-4591-a4a3-ea6e6bfae85b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.842168 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-var-lock" (OuterVolumeSpecName: "var-lock") pod "e84d7679-d97e-4591-a4a3-ea6e6bfae85b" (UID: "e84d7679-d97e-4591-a4a3-ea6e6bfae85b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.849296 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e84d7679-d97e-4591-a4a3-ea6e6bfae85b" (UID: "e84d7679-d97e-4591-a4a3-ea6e6bfae85b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.943591 4996 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.943646 4996 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:54 crc kubenswrapper[4996]: I0228 09:04:54.943664 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e84d7679-d97e-4591-a4a3-ea6e6bfae85b-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:55 crc kubenswrapper[4996]: I0228 09:04:55.257062 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 09:04:55 crc kubenswrapper[4996]: I0228 09:04:55.257116 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e84d7679-d97e-4591-a4a3-ea6e6bfae85b","Type":"ContainerDied","Data":"6682e6a79f98863f076608790a9cf32bc805a7e5cd841723f0d22b302529d406"} Feb 28 09:04:55 crc kubenswrapper[4996]: I0228 09:04:55.257209 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6682e6a79f98863f076608790a9cf32bc805a7e5cd841723f0d22b302529d406" Feb 28 09:04:55 crc kubenswrapper[4996]: I0228 09:04:55.263304 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:55 
crc kubenswrapper[4996]: I0228 09:04:55.264076 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:55 crc kubenswrapper[4996]: I0228 09:04:55.264898 4996 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:56 crc kubenswrapper[4996]: E0228 09:04:56.072485 4996 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" volumeName="registry-storage" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.217590 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.219119 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.219946 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.220864 4996 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.221791 4996 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.222221 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.266439 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.267615 4996 generic.go:334] 
"Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f" exitCode=0 Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.267721 4996 scope.go:117] "RemoveContainer" containerID="98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.267911 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.288966 4996 scope.go:117] "RemoveContainer" containerID="3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.304556 4996 scope.go:117] "RemoveContainer" containerID="5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.325551 4996 scope.go:117] "RemoveContainer" containerID="b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.346094 4996 scope.go:117] "RemoveContainer" containerID="e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.364997 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.365214 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.365936 4996 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.366075 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.366070 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.366197 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.366605 4996 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.366649 4996 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.366670 4996 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.369681 4996 scope.go:117] "RemoveContainer" containerID="4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.398280 4996 scope.go:117] "RemoveContainer" containerID="98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55" Feb 28 09:04:56 crc kubenswrapper[4996]: E0228 09:04:56.399703 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\": container with ID starting with 98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55 not found: ID does not exist" containerID="98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.399740 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55"} err="failed to get container status \"98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\": rpc error: code = 
NotFound desc = could not find container \"98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55\": container with ID starting with 98184efdbb196d30826027f75d9f59a5b5f4ebc84cdac462815fa551ab2e5b55 not found: ID does not exist" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.399764 4996 scope.go:117] "RemoveContainer" containerID="3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c" Feb 28 09:04:56 crc kubenswrapper[4996]: E0228 09:04:56.400311 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\": container with ID starting with 3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c not found: ID does not exist" containerID="3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.400533 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c"} err="failed to get container status \"3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\": rpc error: code = NotFound desc = could not find container \"3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c\": container with ID starting with 3f321c80da99f7b1c309c0f273ea40eb45bc5c82957391afc7637e163b52e80c not found: ID does not exist" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.400561 4996 scope.go:117] "RemoveContainer" containerID="5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005" Feb 28 09:04:56 crc kubenswrapper[4996]: E0228 09:04:56.400917 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\": container with ID starting with 
5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005 not found: ID does not exist" containerID="5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.400974 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005"} err="failed to get container status \"5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\": rpc error: code = NotFound desc = could not find container \"5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005\": container with ID starting with 5b1e25718e21413999aa13ee5af6adb822e3397d026a4e50d3f76206f40f5005 not found: ID does not exist" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.401025 4996 scope.go:117] "RemoveContainer" containerID="b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b" Feb 28 09:04:56 crc kubenswrapper[4996]: E0228 09:04:56.401537 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\": container with ID starting with b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b not found: ID does not exist" containerID="b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.401570 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b"} err="failed to get container status \"b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\": rpc error: code = NotFound desc = could not find container \"b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b\": container with ID starting with b4bb3de364cc9dde7e9a5421bb7ed4e46c9c6484570427929c8ec7dbb6b79b8b not found: ID does not 
exist" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.401585 4996 scope.go:117] "RemoveContainer" containerID="e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f" Feb 28 09:04:56 crc kubenswrapper[4996]: E0228 09:04:56.401786 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\": container with ID starting with e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f not found: ID does not exist" containerID="e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.401817 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f"} err="failed to get container status \"e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\": rpc error: code = NotFound desc = could not find container \"e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f\": container with ID starting with e7d93f1e9e49150f71ec6961bf8ce011e7d1860b9088714ef8533fff5540471f not found: ID does not exist" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.401835 4996 scope.go:117] "RemoveContainer" containerID="4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058" Feb 28 09:04:56 crc kubenswrapper[4996]: E0228 09:04:56.402301 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\": container with ID starting with 4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058 not found: ID does not exist" containerID="4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.402321 4996 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058"} err="failed to get container status \"4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\": rpc error: code = NotFound desc = could not find container \"4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058\": container with ID starting with 4e9e9f25649f7c573329c97a6b9e7e172e5aea08c642cdd9a5fa9a33a6d57058 not found: ID does not exist" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.584678 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.584937 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.585273 4996 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:56 crc kubenswrapper[4996]: I0228 09:04:56.585649 4996 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:57 crc kubenswrapper[4996]: I0228 09:04:57.035329 4996 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:57 crc kubenswrapper[4996]: I0228 09:04:57.035611 4996 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:57 crc kubenswrapper[4996]: I0228 09:04:57.035955 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:57 crc kubenswrapper[4996]: I0228 09:04:57.036214 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:04:57 crc kubenswrapper[4996]: I0228 09:04:57.040470 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" 
path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 28 09:04:59 crc kubenswrapper[4996]: E0228 09:04:59.781796 4996 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.9:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18985db6d8b7d251 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 09:04:53.271646801 +0000 UTC m=+256.962449612,LastTimestamp:2026-02-28 09:04:53.271646801 +0000 UTC m=+256.962449612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 09:05:01 crc kubenswrapper[4996]: E0228 09:05:01.154236 4996 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:01 crc kubenswrapper[4996]: E0228 09:05:01.155090 4996 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:01 crc kubenswrapper[4996]: E0228 09:05:01.155661 4996 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:01 crc kubenswrapper[4996]: E0228 09:05:01.156346 4996 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:01 crc kubenswrapper[4996]: E0228 09:05:01.156920 4996 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:01 crc kubenswrapper[4996]: I0228 09:05:01.156971 4996 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 28 09:05:01 crc kubenswrapper[4996]: E0228 09:05:01.157416 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="200ms" Feb 28 09:05:01 crc kubenswrapper[4996]: E0228 09:05:01.358479 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="400ms" Feb 28 09:05:01 crc kubenswrapper[4996]: E0228 09:05:01.759323 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="800ms" Feb 28 09:05:02 crc kubenswrapper[4996]: E0228 
09:05:02.560308 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="1.6s" Feb 28 09:05:04 crc kubenswrapper[4996]: I0228 09:05:04.033184 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:04 crc kubenswrapper[4996]: I0228 09:05:04.034915 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:04 crc kubenswrapper[4996]: I0228 09:05:04.035185 4996 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:04 crc kubenswrapper[4996]: I0228 09:05:04.035370 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:04 crc kubenswrapper[4996]: I0228 09:05:04.058262 4996 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:04 crc kubenswrapper[4996]: I0228 09:05:04.058326 4996 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:04 crc kubenswrapper[4996]: E0228 09:05:04.058863 4996 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:04 crc kubenswrapper[4996]: I0228 09:05:04.059542 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:04 crc kubenswrapper[4996]: W0228 09:05:04.093350 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-f5640353a9baa16fd8209588d31dca2918bc6b81cf5debe028cbd484cfef61f3 WatchSource:0}: Error finding container f5640353a9baa16fd8209588d31dca2918bc6b81cf5debe028cbd484cfef61f3: Status 404 returned error can't find the container with id f5640353a9baa16fd8209588d31dca2918bc6b81cf5debe028cbd484cfef61f3 Feb 28 09:05:04 crc kubenswrapper[4996]: E0228 09:05:04.160769 4996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.9:6443: connect: connection refused" interval="3.2s" Feb 28 09:05:04 crc kubenswrapper[4996]: I0228 09:05:04.341676 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f5640353a9baa16fd8209588d31dca2918bc6b81cf5debe028cbd484cfef61f3"} Feb 28 09:05:05 crc kubenswrapper[4996]: I0228 09:05:05.353254 4996 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerID="65c37d591c9642d1aeeb49970c5277534910b6ca9ec16cdb5ad289a7acad7e47" exitCode=0 Feb 28 09:05:05 crc kubenswrapper[4996]: I0228 09:05:05.353325 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"65c37d591c9642d1aeeb49970c5277534910b6ca9ec16cdb5ad289a7acad7e47"} Feb 28 09:05:05 crc kubenswrapper[4996]: I0228 09:05:05.353766 4996 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:05 crc kubenswrapper[4996]: I0228 09:05:05.353805 4996 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:05 crc kubenswrapper[4996]: E0228 09:05:05.354571 4996 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:05 crc kubenswrapper[4996]: I0228 09:05:05.355279 4996 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:05 crc kubenswrapper[4996]: I0228 09:05:05.355804 4996 status_manager.go:851] "Failed to get status for pod" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:05 crc kubenswrapper[4996]: 
I0228 09:05:05.356464 4996 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.9:6443: connect: connection refused" Feb 28 09:05:06 crc kubenswrapper[4996]: I0228 09:05:06.365996 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"463b23884cee7459f33485659c6f171268755f5930cc9d1d8d755f0d39cf5e2c"} Feb 28 09:05:06 crc kubenswrapper[4996]: I0228 09:05:06.366333 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1859ce0c579c1053795cc4993671fed207011238182bdd5f223a7485621cd839"} Feb 28 09:05:06 crc kubenswrapper[4996]: I0228 09:05:06.366344 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b2a451e42e5bd6f8d0d947058a5c42d7eb2540e99b089db5cc4f887d1169b900"} Feb 28 09:05:06 crc kubenswrapper[4996]: I0228 09:05:06.370161 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 09:05:06 crc kubenswrapper[4996]: I0228 09:05:06.370588 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 28 09:05:06 crc kubenswrapper[4996]: I0228 09:05:06.370626 4996 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2" exitCode=1 Feb 28 09:05:06 crc kubenswrapper[4996]: I0228 09:05:06.370647 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2"} Feb 28 09:05:06 crc kubenswrapper[4996]: I0228 09:05:06.371131 4996 scope.go:117] "RemoveContainer" containerID="1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2" Feb 28 09:05:06 crc kubenswrapper[4996]: I0228 09:05:06.770114 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:05:07 crc kubenswrapper[4996]: I0228 09:05:07.379968 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8c2f102e3f38e49500b8823f96beca95f7d1bec9f5244e44f195f5e25ed066d6"} Feb 28 09:05:07 crc kubenswrapper[4996]: I0228 09:05:07.380318 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:07 crc kubenswrapper[4996]: I0228 09:05:07.380334 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ab2c9edf6bd67bd1e768555db66f526e1fff9ee7837ae7d57037a344b2c63f97"} Feb 28 09:05:07 crc kubenswrapper[4996]: I0228 09:05:07.380288 4996 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:07 crc kubenswrapper[4996]: I0228 09:05:07.381270 4996 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:07 crc kubenswrapper[4996]: I0228 09:05:07.382740 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 09:05:07 crc kubenswrapper[4996]: I0228 09:05:07.383385 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 28 09:05:07 crc kubenswrapper[4996]: I0228 09:05:07.383449 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c5acd3a9f4b97799599c964b7c356a4cdfe76a6a1ce2bd858556ed3629340dc2"} Feb 28 09:05:09 crc kubenswrapper[4996]: I0228 09:05:09.060727 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:09 crc kubenswrapper[4996]: I0228 09:05:09.061541 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:09 crc kubenswrapper[4996]: I0228 09:05:09.069155 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:09 crc kubenswrapper[4996]: I0228 09:05:09.391813 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.248856 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.249220 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.249293 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.250125 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.250207 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6" gracePeriod=600 Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.390440 4996 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.413855 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6" exitCode=0 Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.413925 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6"} Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.414590 4996 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.414611 4996 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.419156 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.422035 4996 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f72ea99a-57d1-4be4-afe4-6f1ba9e8f9a2" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.913222 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.913468 4996 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 28 09:05:12 crc kubenswrapper[4996]: I0228 09:05:12.913577 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get 
\"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 28 09:05:13 crc kubenswrapper[4996]: I0228 09:05:13.423094 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"346752807d9a3626d399fabf78210641bf6ab96ef710b50bafdc570ef4223171"} Feb 28 09:05:13 crc kubenswrapper[4996]: I0228 09:05:13.423133 4996 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:13 crc kubenswrapper[4996]: I0228 09:05:13.423154 4996 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="10a0fde0-f278-4f9f-ae6f-3a036e85a6eb" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.073040 4996 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f72ea99a-57d1-4be4-afe4-6f1ba9e8f9a2" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.345562 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" podUID="2d5c455e-6954-4ad7-994d-a73049de9b62" containerName="oauth-openshift" containerID="cri-o://22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9" gracePeriod=15 Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.907607 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.979411 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-ocp-branding-template\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.979465 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-service-ca\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.979502 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-trusted-ca-bundle\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.979537 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-session\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.979564 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-error\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: 
\"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.979604 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-login\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.979636 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-provider-selection\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.979672 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-router-certs\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.980689 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.980788 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981061 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-policies\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981094 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhjmw\" (UniqueName: \"kubernetes.io/projected/2d5c455e-6954-4ad7-994d-a73049de9b62-kube-api-access-dhjmw\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981115 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-serving-cert\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981133 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-cliconfig\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: 
\"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981675 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-dir\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981698 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-idp-0-file-data\") pod \"2d5c455e-6954-4ad7-994d-a73049de9b62\" (UID: \"2d5c455e-6954-4ad7-994d-a73049de9b62\") " Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981916 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981930 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981922 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.981989 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.982486 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.986860 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5c455e-6954-4ad7-994d-a73049de9b62-kube-api-access-dhjmw" (OuterVolumeSpecName: "kube-api-access-dhjmw") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "kube-api-access-dhjmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.987769 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.988380 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.988822 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.989506 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.989942 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.990320 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.990686 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:05:17 crc kubenswrapper[4996]: I0228 09:05:17.991266 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2d5c455e-6954-4ad7-994d-a73049de9b62" (UID: "2d5c455e-6954-4ad7-994d-a73049de9b62"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083638 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083687 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083703 4996 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083716 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083733 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhjmw\" (UniqueName: \"kubernetes.io/projected/2d5c455e-6954-4ad7-994d-a73049de9b62-kube-api-access-dhjmw\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083745 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083759 4996 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/2d5c455e-6954-4ad7-994d-a73049de9b62-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083771 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083786 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083798 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083810 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.083821 4996 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2d5c455e-6954-4ad7-994d-a73049de9b62-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.457534 4996 generic.go:334] "Generic (PLEG): container finished" podID="2d5c455e-6954-4ad7-994d-a73049de9b62" containerID="22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9" exitCode=0 Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.457622 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" event={"ID":"2d5c455e-6954-4ad7-994d-a73049de9b62","Type":"ContainerDied","Data":"22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9"} Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.457893 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" event={"ID":"2d5c455e-6954-4ad7-994d-a73049de9b62","Type":"ContainerDied","Data":"a3609da8ef54d90cc61ba7383f5ecf4ed09cde4b873edb4271eb99a8644963b4"} Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.457923 4996 scope.go:117] "RemoveContainer" containerID="22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.457746 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4g92n" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.492269 4996 scope.go:117] "RemoveContainer" containerID="22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9" Feb 28 09:05:18 crc kubenswrapper[4996]: E0228 09:05:18.492878 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9\": container with ID starting with 22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9 not found: ID does not exist" containerID="22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9" Feb 28 09:05:18 crc kubenswrapper[4996]: I0228 09:05:18.492936 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9"} err="failed to get container status \"22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9\": rpc error: code = NotFound desc = could not find container 
\"22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9\": container with ID starting with 22711eb4e66c51c6affbfe7173e9eaa0e79eef4f638fef93f32b590b749759d9 not found: ID does not exist" Feb 28 09:05:19 crc kubenswrapper[4996]: I0228 09:05:19.702622 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 28 09:05:19 crc kubenswrapper[4996]: I0228 09:05:19.797728 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 28 09:05:20 crc kubenswrapper[4996]: I0228 09:05:20.620064 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 28 09:05:21 crc kubenswrapper[4996]: I0228 09:05:21.775605 4996 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 28 09:05:22 crc kubenswrapper[4996]: I0228 09:05:22.473419 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 28 09:05:22 crc kubenswrapper[4996]: I0228 09:05:22.912702 4996 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 28 09:05:22 crc kubenswrapper[4996]: I0228 09:05:22.912795 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 28 09:05:22 crc kubenswrapper[4996]: I0228 09:05:22.940559 4996 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 28 09:05:22 crc kubenswrapper[4996]: I0228 09:05:22.942579 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 28 09:05:23 crc kubenswrapper[4996]: I0228 09:05:23.676696 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 28 09:05:23 crc kubenswrapper[4996]: I0228 09:05:23.737917 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 28 09:05:24 crc kubenswrapper[4996]: I0228 09:05:24.262175 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 28 09:05:24 crc kubenswrapper[4996]: I0228 09:05:24.299409 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 28 09:05:24 crc kubenswrapper[4996]: I0228 09:05:24.320407 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 28 09:05:24 crc kubenswrapper[4996]: I0228 09:05:24.697824 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 28 09:05:24 crc kubenswrapper[4996]: I0228 09:05:24.724383 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 09:05:24 crc kubenswrapper[4996]: I0228 09:05:24.860238 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 28 09:05:24 crc kubenswrapper[4996]: I0228 09:05:24.987150 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 28 09:05:25 crc kubenswrapper[4996]: I0228 09:05:25.113902 4996 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 28 09:05:25 crc kubenswrapper[4996]: I0228 09:05:25.181248 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 28 09:05:25 crc kubenswrapper[4996]: I0228 09:05:25.457182 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 28 09:05:25 crc kubenswrapper[4996]: I0228 09:05:25.475705 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 28 09:05:25 crc kubenswrapper[4996]: I0228 09:05:25.624335 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 28 09:05:25 crc kubenswrapper[4996]: I0228 09:05:25.980941 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.031163 4996 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.042953 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.078485 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.093402 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.281539 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.304810 
4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.377888 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.406799 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.774606 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.881322 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.883333 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 28 09:05:26 crc kubenswrapper[4996]: I0228 09:05:26.915703 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.054814 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.075643 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.387294 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.434822 4996 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.443907 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.460526 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.492921 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.553390 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.609368 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.670120 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.740124 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.754261 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.817345 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 28 09:05:27 crc kubenswrapper[4996]: I0228 09:05:27.971435 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.034703 4996 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.079915 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.293929 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.408618 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.668607 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.765612 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.770871 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.783191 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.823877 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.835570 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 09:05:28.850563 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 28 09:05:28 crc kubenswrapper[4996]: I0228 
09:05:28.882469 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.052612 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.106353 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.131543 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.299454 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.311849 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.328169 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.330176 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.364984 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.454049 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.514394 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.635667 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.746159 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.844639 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.895554 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.924817 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 28 09:05:29 crc kubenswrapper[4996]: I0228 09:05:29.970988 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.342479 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.356271 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.400139 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.402342 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 28 09:05:30 crc 
kubenswrapper[4996]: I0228 09:05:30.492475 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.554743 4996 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.662978 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.664664 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.697600 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.709100 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.726218 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.756469 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.807328 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.808606 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.842157 4996 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.919593 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.961771 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 28 09:05:30 crc kubenswrapper[4996]: I0228 09:05:30.987463 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.024310 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.036789 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.056319 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.074620 4996 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.080578 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.104458 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.202720 4996 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.203350 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.520858 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.575769 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.629973 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.645458 4996 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.707416 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.772169 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.941982 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.981255 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.985187 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 28 09:05:31 crc kubenswrapper[4996]: I0228 09:05:31.993972 4996 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.058583 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.073656 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.157746 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.204557 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.289893 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.420963 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.432527 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.434788 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.436379 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.586292 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.651115 4996 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.654981 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.775051 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.777750 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.784672 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.803773 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.842422 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.909412 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.912773 4996 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.912910 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.913075 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.914085 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"c5acd3a9f4b97799599c964b7c356a4cdfe76a6a1ce2bd858556ed3629340dc2"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Feb 28 09:05:32 crc kubenswrapper[4996]: I0228 09:05:32.914288 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://c5acd3a9f4b97799599c964b7c356a4cdfe76a6a1ce2bd858556ed3629340dc2" gracePeriod=30 Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.043297 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.145501 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.265326 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.295373 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 28 09:05:33 crc 
kubenswrapper[4996]: I0228 09:05:33.367492 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.585666 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.735247 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.758042 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.818318 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.852389 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 28 09:05:33 crc kubenswrapper[4996]: I0228 09:05:33.872777 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.040801 4996 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.046326 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=42.046292797 podStartE2EDuration="42.046292797s" podCreationTimestamp="2026-02-28 09:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:05:12.225572042 +0000 UTC m=+275.916374893" 
watchObservedRunningTime="2026-02-28 09:05:34.046292797 +0000 UTC m=+297.737095638" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.048607 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4g92n","openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.048728 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.051194 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.053690 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.056209 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.078144 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.078120392 podStartE2EDuration="22.078120392s" podCreationTimestamp="2026-02-28 09:05:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:05:34.077989799 +0000 UTC m=+297.768792670" watchObservedRunningTime="2026-02-28 09:05:34.078120392 +0000 UTC m=+297.768923233" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.113869 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.146058 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 
28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.164883 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.184224 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.203298 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.262450 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.340790 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.377243 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.396230 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.451046 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.488888 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.523446 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.557885 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.581801 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.645380 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.755181 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.764782 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.841219 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.845463 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.871892 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.880071 4996 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.880487 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357" gracePeriod=5 
Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.884296 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 28 09:05:34 crc kubenswrapper[4996]: I0228 09:05:34.994108 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.003444 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.046456 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d5c455e-6954-4ad7-994d-a73049de9b62" path="/var/lib/kubelet/pods/2d5c455e-6954-4ad7-994d-a73049de9b62/volumes" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.128737 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.289058 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.289201 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.323764 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.377413 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.391061 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.521639 
4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.530251 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.566364 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.698254 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.754644 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.858814 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.886680 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.919197 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.919397 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 28 09:05:35 crc kubenswrapper[4996]: I0228 09:05:35.981660 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.031205 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 28 
09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.045076 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.089836 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.177961 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.394243 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.504675 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.606109 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.666714 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.676101 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.714243 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.725812 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 
09:05:36.811525 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.903987 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.910353 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 28 09:05:36 crc kubenswrapper[4996]: I0228 09:05:36.944897 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.010531 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.131651 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.157333 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.236355 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.415425 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.440663 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.583211 4996 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.632326 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.704936 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.788334 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.839029 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.903705 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.932858 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76b8488fcc-tvp9q"] Feb 28 09:05:37 crc kubenswrapper[4996]: E0228 09:05:37.933477 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.933632 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 28 09:05:37 crc kubenswrapper[4996]: E0228 09:05:37.933780 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" containerName="installer" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.933926 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" containerName="installer" Feb 28 09:05:37 crc kubenswrapper[4996]: E0228 
09:05:37.934120 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5c455e-6954-4ad7-994d-a73049de9b62" containerName="oauth-openshift" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.934273 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5c455e-6954-4ad7-994d-a73049de9b62" containerName="oauth-openshift" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.934590 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.934759 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5c455e-6954-4ad7-994d-a73049de9b62" containerName="oauth-openshift" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.934907 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e84d7679-d97e-4591-a4a3-ea6e6bfae85b" containerName="installer" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.935718 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.941484 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.941685 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.942079 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.942880 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.943259 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.943496 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.943650 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.944810 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.946738 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.949861 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 28 09:05:37 
crc kubenswrapper[4996]: I0228 09:05:37.958947 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.960501 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.961547 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76b8488fcc-tvp9q"] Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.972944 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.986279 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 28 09:05:37 crc kubenswrapper[4996]: I0228 09:05:37.986785 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.064796 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-template-login\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.065345 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-router-certs\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " 
pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.065583 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-service-ca\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.065868 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.066145 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-audit-policies\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.066358 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ab5f116-9186-4555-a3d6-982e70920ef5-audit-dir\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.066640 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.066856 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-template-error\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.067149 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.067375 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.067609 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkm87\" (UniqueName: 
\"kubernetes.io/projected/0ab5f116-9186-4555-a3d6-982e70920ef5-kube-api-access-gkm87\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.067906 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.068174 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-session\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.068398 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.074201 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.088043 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.141411 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.145318 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169478 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-service-ca\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169564 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169595 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-audit-policies\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169623 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ab5f116-9186-4555-a3d6-982e70920ef5-audit-dir\") pod 
\"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169651 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169675 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-template-error\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169707 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169731 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc 
kubenswrapper[4996]: I0228 09:05:38.169754 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkm87\" (UniqueName: \"kubernetes.io/projected/0ab5f116-9186-4555-a3d6-982e70920ef5-kube-api-access-gkm87\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169793 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169814 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-session\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169823 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0ab5f116-9186-4555-a3d6-982e70920ef5-audit-dir\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.169839 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.170037 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-template-login\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.170089 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-router-certs\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.170812 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.170917 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-service-ca\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.171652 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-audit-policies\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.172070 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.175512 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.175840 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-session\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.184228 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: 
\"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.184873 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-template-login\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.185498 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.185970 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-router-certs\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.189352 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkm87\" (UniqueName: \"kubernetes.io/projected/0ab5f116-9186-4555-a3d6-982e70920ef5-kube-api-access-gkm87\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.189983 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.200661 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.201335 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0ab5f116-9186-4555-a3d6-982e70920ef5-v4-0-config-user-template-error\") pod \"oauth-openshift-76b8488fcc-tvp9q\" (UID: \"0ab5f116-9186-4555-a3d6-982e70920ef5\") " pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.229585 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.229664 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.273388 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.452827 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.490514 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.527222 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.550063 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.719962 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76b8488fcc-tvp9q"] Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.721592 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.739441 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.768546 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 28 09:05:38 crc kubenswrapper[4996]: I0228 09:05:38.951597 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.015655 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 
09:05:39.055527 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.265577 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.348887 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.543968 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.575802 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.607368 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" event={"ID":"0ab5f116-9186-4555-a3d6-982e70920ef5","Type":"ContainerStarted","Data":"1ebfc54cb4694596f58f766cb3174e2b0be2c99f5ad1e776b646de67a974adca"} Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.607407 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" event={"ID":"0ab5f116-9186-4555-a3d6-982e70920ef5","Type":"ContainerStarted","Data":"6f9badf38dc6d0d195a01a31b816a3a776c91e3eaf5c15067cbcf8f207274683"} Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.607730 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.634425 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" 
podStartSLOduration=47.63439418 podStartE2EDuration="47.63439418s" podCreationTimestamp="2026-02-28 09:04:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:05:39.626557581 +0000 UTC m=+303.317360412" watchObservedRunningTime="2026-02-28 09:05:39.63439418 +0000 UTC m=+303.325197021" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.661818 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76b8488fcc-tvp9q" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.792114 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.904542 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 28 09:05:39 crc kubenswrapper[4996]: I0228 09:05:39.959184 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 09:05:40 crc kubenswrapper[4996]: E0228 09:05:40.013663 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-conmon-f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357.scope\": RecentStats: unable to find data in memory cache]" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.482208 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.482509 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.600567 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.600685 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.600776 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.600886 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.600965 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.602564 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.602546 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.602646 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.602724 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.620418 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.624379 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.624468 4996 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357" exitCode=137 Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.624967 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.625529 4996 scope.go:117] "RemoveContainer" containerID="f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.683701 4996 scope.go:117] "RemoveContainer" containerID="f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357" Feb 28 09:05:40 crc kubenswrapper[4996]: E0228 09:05:40.684477 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357\": container with ID starting with f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357 not found: ID does not exist" containerID="f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.684544 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357"} err="failed to get container status \"f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357\": rpc error: code = NotFound desc = could not find container \"f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357\": container with ID starting with f168f8a4c82580f82b9b67d0cea4eb2c76d5704c8839bf5e7a4da1f6d459d357 not found: ID does not exist" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.703284 4996 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.703318 4996 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on 
node \"crc\" DevicePath \"\"" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.703329 4996 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.703339 4996 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.703347 4996 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 28 09:05:40 crc kubenswrapper[4996]: I0228 09:05:40.888097 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 09:05:41 crc kubenswrapper[4996]: I0228 09:05:41.041305 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 28 09:05:41 crc kubenswrapper[4996]: I0228 09:05:41.041885 4996 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 28 09:05:41 crc kubenswrapper[4996]: I0228 09:05:41.044410 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 28 09:05:41 crc kubenswrapper[4996]: I0228 09:05:41.055423 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 09:05:41 crc kubenswrapper[4996]: I0228 09:05:41.055464 4996 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" 
mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3292f3b7-28c3-458d-88fd-81b7c331a656" Feb 28 09:05:41 crc kubenswrapper[4996]: I0228 09:05:41.058998 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 09:05:41 crc kubenswrapper[4996]: I0228 09:05:41.059038 4996 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3292f3b7-28c3-458d-88fd-81b7c331a656" Feb 28 09:05:41 crc kubenswrapper[4996]: I0228 09:05:41.843089 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 28 09:06:03 crc kubenswrapper[4996]: I0228 09:06:03.784819 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 28 09:06:03 crc kubenswrapper[4996]: I0228 09:06:03.792365 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 09:06:03 crc kubenswrapper[4996]: I0228 09:06:03.793384 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 28 09:06:03 crc kubenswrapper[4996]: I0228 09:06:03.793468 4996 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="c5acd3a9f4b97799599c964b7c356a4cdfe76a6a1ce2bd858556ed3629340dc2" exitCode=137 Feb 28 09:06:03 crc kubenswrapper[4996]: I0228 09:06:03.793538 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"c5acd3a9f4b97799599c964b7c356a4cdfe76a6a1ce2bd858556ed3629340dc2"} Feb 28 09:06:03 crc kubenswrapper[4996]: I0228 09:06:03.793595 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fd95a048d8044c9b00a1853c79996833f71972cdb4d2f5b96d387190a46d665a"} Feb 28 09:06:03 crc kubenswrapper[4996]: I0228 09:06:03.793643 4996 scope.go:117] "RemoveContainer" containerID="1d615cb8696c6e2ae94342bc54b0a1f2996ee38ae0310507246fbae53546bba2" Feb 28 09:06:04 crc kubenswrapper[4996]: I0228 09:06:04.801180 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Feb 28 09:06:04 crc kubenswrapper[4996]: I0228 09:06:04.803803 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 09:06:09 crc kubenswrapper[4996]: I0228 09:06:09.392866 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:06:12 crc kubenswrapper[4996]: I0228 09:06:12.913196 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:06:12 crc kubenswrapper[4996]: I0228 09:06:12.918307 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:06:13 crc kubenswrapper[4996]: I0228 09:06:13.852938 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 09:06:22 
crc kubenswrapper[4996]: I0228 09:06:22.784828 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537826-bhwkq"]
Feb 28 09:06:22 crc kubenswrapper[4996]: I0228 09:06:22.786027 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537826-bhwkq"
Feb 28 09:06:22 crc kubenswrapper[4996]: I0228 09:06:22.788187 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq"
Feb 28 09:06:22 crc kubenswrapper[4996]: I0228 09:06:22.788212 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 09:06:22 crc kubenswrapper[4996]: I0228 09:06:22.788878 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 09:06:22 crc kubenswrapper[4996]: I0228 09:06:22.792117 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537826-bhwkq"]
Feb 28 09:06:22 crc kubenswrapper[4996]: I0228 09:06:22.980693 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qbs9\" (UniqueName: \"kubernetes.io/projected/540fba90-cd05-4d58-b37b-25c4dedaf95e-kube-api-access-4qbs9\") pod \"auto-csr-approver-29537826-bhwkq\" (UID: \"540fba90-cd05-4d58-b37b-25c4dedaf95e\") " pod="openshift-infra/auto-csr-approver-29537826-bhwkq"
Feb 28 09:06:23 crc kubenswrapper[4996]: I0228 09:06:23.082066 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qbs9\" (UniqueName: \"kubernetes.io/projected/540fba90-cd05-4d58-b37b-25c4dedaf95e-kube-api-access-4qbs9\") pod \"auto-csr-approver-29537826-bhwkq\" (UID: \"540fba90-cd05-4d58-b37b-25c4dedaf95e\") " pod="openshift-infra/auto-csr-approver-29537826-bhwkq"
Feb 28 09:06:23 crc kubenswrapper[4996]: I0228 09:06:23.109756 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qbs9\" (UniqueName: \"kubernetes.io/projected/540fba90-cd05-4d58-b37b-25c4dedaf95e-kube-api-access-4qbs9\") pod \"auto-csr-approver-29537826-bhwkq\" (UID: \"540fba90-cd05-4d58-b37b-25c4dedaf95e\") " pod="openshift-infra/auto-csr-approver-29537826-bhwkq"
Feb 28 09:06:23 crc kubenswrapper[4996]: I0228 09:06:23.402377 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537826-bhwkq"
Feb 28 09:06:23 crc kubenswrapper[4996]: I0228 09:06:23.839412 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537826-bhwkq"]
Feb 28 09:06:23 crc kubenswrapper[4996]: I0228 09:06:23.910801 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537826-bhwkq" event={"ID":"540fba90-cd05-4d58-b37b-25c4dedaf95e","Type":"ContainerStarted","Data":"6a8cadd22c3d835571741b46f04c314ff4e1962ae863d3ac78104966ea2c6df5"}
Feb 28 09:06:24 crc kubenswrapper[4996]: I0228 09:06:24.917707 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537826-bhwkq" event={"ID":"540fba90-cd05-4d58-b37b-25c4dedaf95e","Type":"ContainerStarted","Data":"ee5ee9fa21014fca1aa2700ae0ff95a7c43bb7b934751dec30d08f8e1013817f"}
Feb 28 09:06:24 crc kubenswrapper[4996]: I0228 09:06:24.930345 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537826-bhwkq" podStartSLOduration=2.09227204 podStartE2EDuration="2.930327964s" podCreationTimestamp="2026-02-28 09:06:22 +0000 UTC" firstStartedPulling="2026-02-28 09:06:23.848779797 +0000 UTC m=+347.539582608" lastFinishedPulling="2026-02-28 09:06:24.686835721 +0000 UTC m=+348.377638532" observedRunningTime="2026-02-28 09:06:24.929511904 +0000 UTC m=+348.620314725" watchObservedRunningTime="2026-02-28 09:06:24.930327964 +0000 UTC m=+348.621130775"
Feb 28 09:06:25 crc kubenswrapper[4996]: I0228 09:06:25.927796 4996 generic.go:334] "Generic (PLEG): container finished" podID="540fba90-cd05-4d58-b37b-25c4dedaf95e" containerID="ee5ee9fa21014fca1aa2700ae0ff95a7c43bb7b934751dec30d08f8e1013817f" exitCode=0
Feb 28 09:06:25 crc kubenswrapper[4996]: I0228 09:06:25.927840 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537826-bhwkq" event={"ID":"540fba90-cd05-4d58-b37b-25c4dedaf95e","Type":"ContainerDied","Data":"ee5ee9fa21014fca1aa2700ae0ff95a7c43bb7b934751dec30d08f8e1013817f"}
Feb 28 09:06:27 crc kubenswrapper[4996]: I0228 09:06:27.143020 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537826-bhwkq"
Feb 28 09:06:27 crc kubenswrapper[4996]: I0228 09:06:27.239085 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qbs9\" (UniqueName: \"kubernetes.io/projected/540fba90-cd05-4d58-b37b-25c4dedaf95e-kube-api-access-4qbs9\") pod \"540fba90-cd05-4d58-b37b-25c4dedaf95e\" (UID: \"540fba90-cd05-4d58-b37b-25c4dedaf95e\") "
Feb 28 09:06:27 crc kubenswrapper[4996]: I0228 09:06:27.244344 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/540fba90-cd05-4d58-b37b-25c4dedaf95e-kube-api-access-4qbs9" (OuterVolumeSpecName: "kube-api-access-4qbs9") pod "540fba90-cd05-4d58-b37b-25c4dedaf95e" (UID: "540fba90-cd05-4d58-b37b-25c4dedaf95e"). InnerVolumeSpecName "kube-api-access-4qbs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:06:27 crc kubenswrapper[4996]: I0228 09:06:27.339625 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qbs9\" (UniqueName: \"kubernetes.io/projected/540fba90-cd05-4d58-b37b-25c4dedaf95e-kube-api-access-4qbs9\") on node \"crc\" DevicePath \"\""
Feb 28 09:06:27 crc kubenswrapper[4996]: I0228 09:06:27.940734 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537826-bhwkq" event={"ID":"540fba90-cd05-4d58-b37b-25c4dedaf95e","Type":"ContainerDied","Data":"6a8cadd22c3d835571741b46f04c314ff4e1962ae863d3ac78104966ea2c6df5"}
Feb 28 09:06:27 crc kubenswrapper[4996]: I0228 09:06:27.941097 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a8cadd22c3d835571741b46f04c314ff4e1962ae863d3ac78104966ea2c6df5"
Feb 28 09:06:27 crc kubenswrapper[4996]: I0228 09:06:27.940814 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537826-bhwkq"
Feb 28 09:07:12 crc kubenswrapper[4996]: I0228 09:07:12.249930 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 09:07:12 crc kubenswrapper[4996]: I0228 09:07:12.250660 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 09:07:42 crc kubenswrapper[4996]: I0228 09:07:42.249058 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 09:07:42 crc kubenswrapper[4996]: I0228 09:07:42.249656 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.175708 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537828-jnp7f"]
Feb 28 09:08:00 crc kubenswrapper[4996]: E0228 09:08:00.176469 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540fba90-cd05-4d58-b37b-25c4dedaf95e" containerName="oc"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.176485 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="540fba90-cd05-4d58-b37b-25c4dedaf95e" containerName="oc"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.176616 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="540fba90-cd05-4d58-b37b-25c4dedaf95e" containerName="oc"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.177046 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537828-jnp7f"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.179691 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.180574 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.184279 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.231951 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537828-jnp7f"]
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.337359 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl28l\" (UniqueName: \"kubernetes.io/projected/ff4c34a9-7e1d-4afc-b670-9e0c70f0271d-kube-api-access-cl28l\") pod \"auto-csr-approver-29537828-jnp7f\" (UID: \"ff4c34a9-7e1d-4afc-b670-9e0c70f0271d\") " pod="openshift-infra/auto-csr-approver-29537828-jnp7f"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.439430 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl28l\" (UniqueName: \"kubernetes.io/projected/ff4c34a9-7e1d-4afc-b670-9e0c70f0271d-kube-api-access-cl28l\") pod \"auto-csr-approver-29537828-jnp7f\" (UID: \"ff4c34a9-7e1d-4afc-b670-9e0c70f0271d\") " pod="openshift-infra/auto-csr-approver-29537828-jnp7f"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.460471 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl28l\" (UniqueName: \"kubernetes.io/projected/ff4c34a9-7e1d-4afc-b670-9e0c70f0271d-kube-api-access-cl28l\") pod \"auto-csr-approver-29537828-jnp7f\" (UID: \"ff4c34a9-7e1d-4afc-b670-9e0c70f0271d\") " pod="openshift-infra/auto-csr-approver-29537828-jnp7f"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.504628 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537828-jnp7f"
Feb 28 09:08:00 crc kubenswrapper[4996]: I0228 09:08:00.716478 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537828-jnp7f"]
Feb 28 09:08:01 crc kubenswrapper[4996]: I0228 09:08:01.549268 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537828-jnp7f" event={"ID":"ff4c34a9-7e1d-4afc-b670-9e0c70f0271d","Type":"ContainerStarted","Data":"5d037de29345d0954c4173353461b577f745b3207f353ce1dcdc42c3bdde7aa5"}
Feb 28 09:08:02 crc kubenswrapper[4996]: E0228 09:08:02.221058 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff4c34a9_7e1d_4afc_b670_9e0c70f0271d.slice/crio-fcf0e718e7a58ac348c08fd0f19de42ed17d7a528175690d536f90d6b2190fdb.scope\": RecentStats: unable to find data in memory cache]"
Feb 28 09:08:02 crc kubenswrapper[4996]: I0228 09:08:02.568279 4996 generic.go:334] "Generic (PLEG): container finished" podID="ff4c34a9-7e1d-4afc-b670-9e0c70f0271d" containerID="fcf0e718e7a58ac348c08fd0f19de42ed17d7a528175690d536f90d6b2190fdb" exitCode=0
Feb 28 09:08:02 crc kubenswrapper[4996]: I0228 09:08:02.568381 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537828-jnp7f" event={"ID":"ff4c34a9-7e1d-4afc-b670-9e0c70f0271d","Type":"ContainerDied","Data":"fcf0e718e7a58ac348c08fd0f19de42ed17d7a528175690d536f90d6b2190fdb"}
Feb 28 09:08:03 crc kubenswrapper[4996]: I0228 09:08:03.897633 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537828-jnp7f"
Feb 28 09:08:03 crc kubenswrapper[4996]: I0228 09:08:03.988274 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl28l\" (UniqueName: \"kubernetes.io/projected/ff4c34a9-7e1d-4afc-b670-9e0c70f0271d-kube-api-access-cl28l\") pod \"ff4c34a9-7e1d-4afc-b670-9e0c70f0271d\" (UID: \"ff4c34a9-7e1d-4afc-b670-9e0c70f0271d\") "
Feb 28 09:08:03 crc kubenswrapper[4996]: I0228 09:08:03.994767 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff4c34a9-7e1d-4afc-b670-9e0c70f0271d-kube-api-access-cl28l" (OuterVolumeSpecName: "kube-api-access-cl28l") pod "ff4c34a9-7e1d-4afc-b670-9e0c70f0271d" (UID: "ff4c34a9-7e1d-4afc-b670-9e0c70f0271d"). InnerVolumeSpecName "kube-api-access-cl28l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:08:04 crc kubenswrapper[4996]: I0228 09:08:04.090210 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl28l\" (UniqueName: \"kubernetes.io/projected/ff4c34a9-7e1d-4afc-b670-9e0c70f0271d-kube-api-access-cl28l\") on node \"crc\" DevicePath \"\""
Feb 28 09:08:04 crc kubenswrapper[4996]: I0228 09:08:04.589373 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537828-jnp7f" event={"ID":"ff4c34a9-7e1d-4afc-b670-9e0c70f0271d","Type":"ContainerDied","Data":"5d037de29345d0954c4173353461b577f745b3207f353ce1dcdc42c3bdde7aa5"}
Feb 28 09:08:04 crc kubenswrapper[4996]: I0228 09:08:04.589749 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d037de29345d0954c4173353461b577f745b3207f353ce1dcdc42c3bdde7aa5"
Feb 28 09:08:04 crc kubenswrapper[4996]: I0228 09:08:04.589477 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537828-jnp7f"
Feb 28 09:08:12 crc kubenswrapper[4996]: I0228 09:08:12.248808 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 28 09:08:12 crc kubenswrapper[4996]: I0228 09:08:12.249507 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 28 09:08:12 crc kubenswrapper[4996]: I0228 09:08:12.249591 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj"
Feb 28 09:08:12 crc kubenswrapper[4996]: I0228 09:08:12.250637 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"346752807d9a3626d399fabf78210641bf6ab96ef710b50bafdc570ef4223171"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 28 09:08:12 crc kubenswrapper[4996]: I0228 09:08:12.250907 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://346752807d9a3626d399fabf78210641bf6ab96ef710b50bafdc570ef4223171" gracePeriod=600
Feb 28 09:08:12 crc kubenswrapper[4996]: E0228 09:08:12.402497 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda98c14ee_40d6_4e30_9390_154743a75c63.slice/crio-conmon-346752807d9a3626d399fabf78210641bf6ab96ef710b50bafdc570ef4223171.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda98c14ee_40d6_4e30_9390_154743a75c63.slice/crio-346752807d9a3626d399fabf78210641bf6ab96ef710b50bafdc570ef4223171.scope\": RecentStats: unable to find data in memory cache]"
Feb 28 09:08:12 crc kubenswrapper[4996]: I0228 09:08:12.646803 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="346752807d9a3626d399fabf78210641bf6ab96ef710b50bafdc570ef4223171" exitCode=0
Feb 28 09:08:12 crc kubenswrapper[4996]: I0228 09:08:12.646891 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"346752807d9a3626d399fabf78210641bf6ab96ef710b50bafdc570ef4223171"}
Feb 28 09:08:12 crc kubenswrapper[4996]: I0228 09:08:12.647219 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"e3b6da15faf8b8661d31491d68582f55b569ea0ca1baae1efe37fa713b132293"}
Feb 28 09:08:12 crc kubenswrapper[4996]: I0228 09:08:12.647246 4996 scope.go:117] "RemoveContainer" containerID="0c0753d6c18f514d8fee71394d5f1065c80bdec4446d745e03d085c26b5500f6"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.628827 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5szk"]
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.630189 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f5szk" podUID="33a7d489-df52-4b28-90f9-9135da43486f" containerName="registry-server" containerID="cri-o://772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492" gracePeriod=30
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.647430 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2899"]
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.647864 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2899" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerName="registry-server" containerID="cri-o://c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c" gracePeriod=30
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.657851 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2b9vl"]
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.658151 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" podUID="8e1b2a41-1776-4907-b520-c7c941c17a54" containerName="marketplace-operator" containerID="cri-o://65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb" gracePeriod=30
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.676104 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpd48"]
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.676612 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lpd48" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerName="registry-server" containerID="cri-o://224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4" gracePeriod=30
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.680711 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltn9t"]
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.681167 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ltn9t" podUID="50d13816-0091-4325-88fd-acac1435d7ea" containerName="registry-server" containerID="cri-o://1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3" gracePeriod=30
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.685418 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vtmgq"]
Feb 28 09:08:13 crc kubenswrapper[4996]: E0228 09:08:13.685720 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff4c34a9-7e1d-4afc-b670-9e0c70f0271d" containerName="oc"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.685738 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff4c34a9-7e1d-4afc-b670-9e0c70f0271d" containerName="oc"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.685853 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff4c34a9-7e1d-4afc-b670-9e0c70f0271d" containerName="oc"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.687238 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.695835 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vtmgq"]
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.724129 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cee24e37-cdd8-4423-831e-8c13e1f30c37-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vtmgq\" (UID: \"cee24e37-cdd8-4423-831e-8c13e1f30c37\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.724456 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsg7k\" (UniqueName: \"kubernetes.io/projected/cee24e37-cdd8-4423-831e-8c13e1f30c37-kube-api-access-fsg7k\") pod \"marketplace-operator-79b997595-vtmgq\" (UID: \"cee24e37-cdd8-4423-831e-8c13e1f30c37\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.724490 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cee24e37-cdd8-4423-831e-8c13e1f30c37-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vtmgq\" (UID: \"cee24e37-cdd8-4423-831e-8c13e1f30c37\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.825610 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cee24e37-cdd8-4423-831e-8c13e1f30c37-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vtmgq\" (UID: \"cee24e37-cdd8-4423-831e-8c13e1f30c37\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.825729 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cee24e37-cdd8-4423-831e-8c13e1f30c37-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vtmgq\" (UID: \"cee24e37-cdd8-4423-831e-8c13e1f30c37\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.825769 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsg7k\" (UniqueName: \"kubernetes.io/projected/cee24e37-cdd8-4423-831e-8c13e1f30c37-kube-api-access-fsg7k\") pod \"marketplace-operator-79b997595-vtmgq\" (UID: \"cee24e37-cdd8-4423-831e-8c13e1f30c37\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.827481 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cee24e37-cdd8-4423-831e-8c13e1f30c37-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vtmgq\" (UID: \"cee24e37-cdd8-4423-831e-8c13e1f30c37\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.834761 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cee24e37-cdd8-4423-831e-8c13e1f30c37-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vtmgq\" (UID: \"cee24e37-cdd8-4423-831e-8c13e1f30c37\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: I0228 09:08:13.844886 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsg7k\" (UniqueName: \"kubernetes.io/projected/cee24e37-cdd8-4423-831e-8c13e1f30c37-kube-api-access-fsg7k\") pod \"marketplace-operator-79b997595-vtmgq\" (UID: \"cee24e37-cdd8-4423-831e-8c13e1f30c37\") " pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:13 crc kubenswrapper[4996]: E0228 09:08:13.945831 4996 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3 is running failed: container process not found" containerID="1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3" cmd=["grpc_health_probe","-addr=:50051"]
Feb 28 09:08:13 crc kubenswrapper[4996]: E0228 09:08:13.946740 4996 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3 is running failed: container process not found" containerID="1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3" cmd=["grpc_health_probe","-addr=:50051"]
Feb 28 09:08:13 crc kubenswrapper[4996]: E0228 09:08:13.947175 4996 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3 is running failed: container process not found" containerID="1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3" cmd=["grpc_health_probe","-addr=:50051"]
Feb 28 09:08:13 crc kubenswrapper[4996]: E0228 09:08:13.947230 4996 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-ltn9t" podUID="50d13816-0091-4325-88fd-acac1435d7ea" containerName="registry-server"
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.147823 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq"
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.163301 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2899"
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.172068 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5szk"
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.191451 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpd48"
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.196146 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl"
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.201700 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltn9t"
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334216 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv4vs\" (UniqueName: \"kubernetes.io/projected/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-kube-api-access-bv4vs\") pod \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334294 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-catalog-content\") pod \"50d13816-0091-4325-88fd-acac1435d7ea\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334330 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpqxz\" (UniqueName: \"kubernetes.io/projected/50d13816-0091-4325-88fd-acac1435d7ea-kube-api-access-gpqxz\") pod \"50d13816-0091-4325-88fd-acac1435d7ea\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334360 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-catalog-content\") pod \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334385 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pm7h\" (UniqueName: \"kubernetes.io/projected/8e1b2a41-1776-4907-b520-c7c941c17a54-kube-api-access-2pm7h\") pod \"8e1b2a41-1776-4907-b520-c7c941c17a54\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334417 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-catalog-content\") pod \"33a7d489-df52-4b28-90f9-9135da43486f\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334456 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-trusted-ca\") pod \"8e1b2a41-1776-4907-b520-c7c941c17a54\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334492 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpx56\" (UniqueName: \"kubernetes.io/projected/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-kube-api-access-wpx56\") pod \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334529 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj748\" (UniqueName: \"kubernetes.io/projected/33a7d489-df52-4b28-90f9-9135da43486f-kube-api-access-sj748\") pod \"33a7d489-df52-4b28-90f9-9135da43486f\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.334555 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-catalog-content\") pod \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.336437 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-operator-metrics\") pod \"8e1b2a41-1776-4907-b520-c7c941c17a54\" (UID: \"8e1b2a41-1776-4907-b520-c7c941c17a54\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.336486 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-utilities\") pod \"33a7d489-df52-4b28-90f9-9135da43486f\" (UID: \"33a7d489-df52-4b28-90f9-9135da43486f\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.336566 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-utilities\") pod \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\" (UID: \"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.336587 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-utilities\") pod \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\" (UID: \"f6d1fda6-9673-42cb-b6c4-b4375f870bcb\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.336624 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-utilities\") pod \"50d13816-0091-4325-88fd-acac1435d7ea\" (UID: \"50d13816-0091-4325-88fd-acac1435d7ea\") "
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.339627 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8e1b2a41-1776-4907-b520-c7c941c17a54" (UID: "8e1b2a41-1776-4907-b520-c7c941c17a54"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.340503 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-kube-api-access-wpx56" (OuterVolumeSpecName: "kube-api-access-wpx56") pod "b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" (UID: "b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a"). InnerVolumeSpecName "kube-api-access-wpx56". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.340622 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8e1b2a41-1776-4907-b520-c7c941c17a54" (UID: "8e1b2a41-1776-4907-b520-c7c941c17a54"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.340680 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-utilities" (OuterVolumeSpecName: "utilities") pod "f6d1fda6-9673-42cb-b6c4-b4375f870bcb" (UID: "f6d1fda6-9673-42cb-b6c4-b4375f870bcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.341430 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e1b2a41-1776-4907-b520-c7c941c17a54-kube-api-access-2pm7h" (OuterVolumeSpecName: "kube-api-access-2pm7h") pod "8e1b2a41-1776-4907-b520-c7c941c17a54" (UID: "8e1b2a41-1776-4907-b520-c7c941c17a54"). InnerVolumeSpecName "kube-api-access-2pm7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.341935 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-utilities" (OuterVolumeSpecName: "utilities") pod "33a7d489-df52-4b28-90f9-9135da43486f" (UID: "33a7d489-df52-4b28-90f9-9135da43486f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.342028 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-utilities" (OuterVolumeSpecName: "utilities") pod "50d13816-0091-4325-88fd-acac1435d7ea" (UID: "50d13816-0091-4325-88fd-acac1435d7ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.342077 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-kube-api-access-bv4vs" (OuterVolumeSpecName: "kube-api-access-bv4vs") pod "f6d1fda6-9673-42cb-b6c4-b4375f870bcb" (UID: "f6d1fda6-9673-42cb-b6c4-b4375f870bcb"). InnerVolumeSpecName "kube-api-access-bv4vs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.342171 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-utilities" (OuterVolumeSpecName: "utilities") pod "b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" (UID: "b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.359465 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33a7d489-df52-4b28-90f9-9135da43486f-kube-api-access-sj748" (OuterVolumeSpecName: "kube-api-access-sj748") pod "33a7d489-df52-4b28-90f9-9135da43486f" (UID: "33a7d489-df52-4b28-90f9-9135da43486f"). InnerVolumeSpecName "kube-api-access-sj748". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.359785 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d13816-0091-4325-88fd-acac1435d7ea-kube-api-access-gpqxz" (OuterVolumeSpecName: "kube-api-access-gpqxz") pod "50d13816-0091-4325-88fd-acac1435d7ea" (UID: "50d13816-0091-4325-88fd-acac1435d7ea"). InnerVolumeSpecName "kube-api-access-gpqxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.379737 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vtmgq"]
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.393895 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6d1fda6-9673-42cb-b6c4-b4375f870bcb" (UID: "f6d1fda6-9673-42cb-b6c4-b4375f870bcb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.410510 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33a7d489-df52-4b28-90f9-9135da43486f" (UID: "33a7d489-df52-4b28-90f9-9135da43486f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.414571 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" (UID: "b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438166 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438484 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv4vs\" (UniqueName: \"kubernetes.io/projected/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-kube-api-access-bv4vs\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438501 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpqxz\" (UniqueName: \"kubernetes.io/projected/50d13816-0091-4325-88fd-acac1435d7ea-kube-api-access-gpqxz\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438512 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pm7h\" (UniqueName: \"kubernetes.io/projected/8e1b2a41-1776-4907-b520-c7c941c17a54-kube-api-access-2pm7h\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438524 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438535 4996 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438546 4996 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438558 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpx56\" (UniqueName: \"kubernetes.io/projected/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-kube-api-access-wpx56\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438570 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj748\" (UniqueName: \"kubernetes.io/projected/33a7d489-df52-4b28-90f9-9135da43486f-kube-api-access-sj748\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438595 4996 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8e1b2a41-1776-4907-b520-c7c941c17a54-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438609 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438619 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33a7d489-df52-4b28-90f9-9135da43486f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438630 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.438640 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6d1fda6-9673-42cb-b6c4-b4375f870bcb-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.504855 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50d13816-0091-4325-88fd-acac1435d7ea" (UID: "50d13816-0091-4325-88fd-acac1435d7ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.539960 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50d13816-0091-4325-88fd-acac1435d7ea-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.675582 4996 generic.go:334] "Generic (PLEG): container finished" podID="50d13816-0091-4325-88fd-acac1435d7ea" containerID="1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3" exitCode=0 Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.675663 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltn9t" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.675695 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltn9t" event={"ID":"50d13816-0091-4325-88fd-acac1435d7ea","Type":"ContainerDied","Data":"1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.675734 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltn9t" event={"ID":"50d13816-0091-4325-88fd-acac1435d7ea","Type":"ContainerDied","Data":"727b001dee271b23add17d611615359e0bdd495475bfc564b5c8d00394db2ef6"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.675761 4996 scope.go:117] "RemoveContainer" containerID="1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.677172 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq" event={"ID":"cee24e37-cdd8-4423-831e-8c13e1f30c37","Type":"ContainerStarted","Data":"900d23badb438ac5802a9d8b8c9f622daf036d016d9dc75586c6c005fdae1853"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.677206 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq" event={"ID":"cee24e37-cdd8-4423-831e-8c13e1f30c37","Type":"ContainerStarted","Data":"a80592a999c745ffe037f1eee621fd257a8457fe4bbe1075c3ea928785942111"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.677635 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.679404 4996 generic.go:334] "Generic (PLEG): container finished" podID="8e1b2a41-1776-4907-b520-c7c941c17a54" 
containerID="65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb" exitCode=0 Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.679471 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.679493 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" event={"ID":"8e1b2a41-1776-4907-b520-c7c941c17a54","Type":"ContainerDied","Data":"65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.679529 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-2b9vl" event={"ID":"8e1b2a41-1776-4907-b520-c7c941c17a54","Type":"ContainerDied","Data":"1c7fe2c3b06c10b82b0f7d62ff3e154a407aebd82b2b22099848ec29131bf7cf"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.679628 4996 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vtmgq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" start-of-body= Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.679715 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq" podUID="cee24e37-cdd8-4423-831e-8c13e1f30c37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": dial tcp 10.217.0.70:8080: connect: connection refused" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.706849 4996 scope.go:117] "RemoveContainer" containerID="9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.708990 4996 generic.go:334] "Generic (PLEG): 
container finished" podID="33a7d489-df52-4b28-90f9-9135da43486f" containerID="772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492" exitCode=0 Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.709154 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5szk" event={"ID":"33a7d489-df52-4b28-90f9-9135da43486f","Type":"ContainerDied","Data":"772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.709181 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f5szk" event={"ID":"33a7d489-df52-4b28-90f9-9135da43486f","Type":"ContainerDied","Data":"5cf980efbcd8368f398c3990b1bbff9811d38a0094ce869c46769b8f604e2bdf"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.709250 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f5szk" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.709623 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq" podStartSLOduration=1.709602393 podStartE2EDuration="1.709602393s" podCreationTimestamp="2026-02-28 09:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:08:14.70705106 +0000 UTC m=+458.397853901" watchObservedRunningTime="2026-02-28 09:08:14.709602393 +0000 UTC m=+458.400405204" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.713183 4996 generic.go:334] "Generic (PLEG): container finished" podID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerID="c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c" exitCode=0 Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.713272 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-w2899" event={"ID":"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a","Type":"ContainerDied","Data":"c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.713307 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2899" event={"ID":"b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a","Type":"ContainerDied","Data":"9915957ef916b4f832a47bd6c915bca5d96e839bd61c9278b20793e6ec690ab6"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.713340 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2899" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.725217 4996 generic.go:334] "Generic (PLEG): container finished" podID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerID="224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4" exitCode=0 Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.725278 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpd48" event={"ID":"f6d1fda6-9673-42cb-b6c4-b4375f870bcb","Type":"ContainerDied","Data":"224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.725310 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lpd48" event={"ID":"f6d1fda6-9673-42cb-b6c4-b4375f870bcb","Type":"ContainerDied","Data":"604a26a9d4fba1c0665c9eff9a91b4568d9c9f9ee81df9094be2018588eabfa7"} Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.725362 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lpd48" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.727412 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltn9t"] Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.730773 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ltn9t"] Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.730988 4996 scope.go:117] "RemoveContainer" containerID="1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.749135 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2b9vl"] Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.752653 4996 scope.go:117] "RemoveContainer" containerID="1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.753152 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-2b9vl"] Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.753368 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3\": container with ID starting with 1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3 not found: ID does not exist" containerID="1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.753406 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3"} err="failed to get container status \"1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3\": rpc error: code = NotFound desc = could not find 
container \"1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3\": container with ID starting with 1e4246193e57b8c90a3c5692204e78d5ece9f284afa6b54b746409d08e4ba1e3 not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.753435 4996 scope.go:117] "RemoveContainer" containerID="9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.753751 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d\": container with ID starting with 9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d not found: ID does not exist" containerID="9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.753794 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d"} err="failed to get container status \"9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d\": rpc error: code = NotFound desc = could not find container \"9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d\": container with ID starting with 9b2a9506a5ed2c7bf921589f7f380f7cbf7eabed6e4e9cd385464f4f8b1ef20d not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.753826 4996 scope.go:117] "RemoveContainer" containerID="1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.754583 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776\": container with ID starting with 1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776 not found: ID does 
not exist" containerID="1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.754606 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776"} err="failed to get container status \"1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776\": rpc error: code = NotFound desc = could not find container \"1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776\": container with ID starting with 1265f8982baeb9e0a2798aad20522a59a2c2a404454d83c44376ab6ad5ecf776 not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.754624 4996 scope.go:117] "RemoveContainer" containerID="65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.793126 4996 scope.go:117] "RemoveContainer" containerID="65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.793547 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb\": container with ID starting with 65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb not found: ID does not exist" containerID="65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.793584 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb"} err="failed to get container status \"65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb\": rpc error: code = NotFound desc = could not find container \"65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb\": container with ID starting 
with 65f67b511cd1aae79866a9a052acc8b77ec579dbf52c6e264a12f7f5744f79bb not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.793611 4996 scope.go:117] "RemoveContainer" containerID="772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.802394 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f5szk"] Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.812274 4996 scope.go:117] "RemoveContainer" containerID="9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.819359 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f5szk"] Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.828165 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2899"] Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.833419 4996 scope.go:117] "RemoveContainer" containerID="1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.834233 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2899"] Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.842717 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpd48"] Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.847453 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lpd48"] Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.851440 4996 scope.go:117] "RemoveContainer" containerID="772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.852161 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492\": container with ID starting with 772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492 not found: ID does not exist" containerID="772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.852194 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492"} err="failed to get container status \"772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492\": rpc error: code = NotFound desc = could not find container \"772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492\": container with ID starting with 772529f6c0c4432325554e514762759c589be64a0c15c55d11a90041c823e492 not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.852222 4996 scope.go:117] "RemoveContainer" containerID="9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.852659 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0\": container with ID starting with 9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0 not found: ID does not exist" containerID="9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.852699 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0"} err="failed to get container status \"9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0\": rpc error: code = NotFound desc = could not find container 
\"9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0\": container with ID starting with 9e57b720ab8504b5070ad19f8f8018096c4c887d9808810816b8786e02c066b0 not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.852728 4996 scope.go:117] "RemoveContainer" containerID="1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.853412 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f\": container with ID starting with 1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f not found: ID does not exist" containerID="1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.853452 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f"} err="failed to get container status \"1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f\": rpc error: code = NotFound desc = could not find container \"1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f\": container with ID starting with 1354c20c99bbff74063781a762b9eb6d8ec8b252c118206f0b9e11453932624f not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.853482 4996 scope.go:117] "RemoveContainer" containerID="c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.871318 4996 scope.go:117] "RemoveContainer" containerID="bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.890117 4996 scope.go:117] "RemoveContainer" containerID="f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b" Feb 28 09:08:14 crc 
kubenswrapper[4996]: I0228 09:08:14.912571 4996 scope.go:117] "RemoveContainer" containerID="c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.913282 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c\": container with ID starting with c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c not found: ID does not exist" containerID="c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.913374 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c"} err="failed to get container status \"c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c\": rpc error: code = NotFound desc = could not find container \"c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c\": container with ID starting with c2bf982ea7c152e50437ba0b2f535def2b4462fd16079e98aaef9de8528a170c not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.913449 4996 scope.go:117] "RemoveContainer" containerID="bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.914185 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389\": container with ID starting with bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389 not found: ID does not exist" containerID="bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.914228 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389"} err="failed to get container status \"bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389\": rpc error: code = NotFound desc = could not find container \"bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389\": container with ID starting with bf124f658fd429567f44c576bfa1fdebd725cc79ec9193b1739d175e6ef2e389 not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.914258 4996 scope.go:117] "RemoveContainer" containerID="f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.914582 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b\": container with ID starting with f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b not found: ID does not exist" containerID="f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.914626 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b"} err="failed to get container status \"f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b\": rpc error: code = NotFound desc = could not find container \"f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b\": container with ID starting with f2145adb0b19f243bb0f3a55c5422522a17171b37ce795d31cf453d128de9c3b not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.914657 4996 scope.go:117] "RemoveContainer" containerID="224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.928628 4996 scope.go:117] "RemoveContainer" 
containerID="b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.949586 4996 scope.go:117] "RemoveContainer" containerID="0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.967772 4996 scope.go:117] "RemoveContainer" containerID="224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.968456 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4\": container with ID starting with 224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4 not found: ID does not exist" containerID="224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.968518 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4"} err="failed to get container status \"224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4\": rpc error: code = NotFound desc = could not find container \"224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4\": container with ID starting with 224fa94f358c03d75fdb68542789077c9d62fb5e6d8b09a641741056fd574ed4 not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.968559 4996 scope.go:117] "RemoveContainer" containerID="b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.969088 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2\": container with ID starting with 
b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2 not found: ID does not exist" containerID="b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.969121 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2"} err="failed to get container status \"b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2\": rpc error: code = NotFound desc = could not find container \"b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2\": container with ID starting with b67a6657b208b1959a6a99ace7921f0d5dae53aa6d631208eb489fcc0d7b8ce2 not found: ID does not exist" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.969146 4996 scope.go:117] "RemoveContainer" containerID="0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0" Feb 28 09:08:14 crc kubenswrapper[4996]: E0228 09:08:14.969515 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0\": container with ID starting with 0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0 not found: ID does not exist" containerID="0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0" Feb 28 09:08:14 crc kubenswrapper[4996]: I0228 09:08:14.969542 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0"} err="failed to get container status \"0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0\": rpc error: code = NotFound desc = could not find container \"0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0\": container with ID starting with 0eab2d339701ea7f7dff472a9ef868fcac6a3d26cdcee5f43d57f76c9b124ed0 not found: ID does not 
exist" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.041182 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33a7d489-df52-4b28-90f9-9135da43486f" path="/var/lib/kubelet/pods/33a7d489-df52-4b28-90f9-9135da43486f/volumes" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.041975 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50d13816-0091-4325-88fd-acac1435d7ea" path="/var/lib/kubelet/pods/50d13816-0091-4325-88fd-acac1435d7ea/volumes" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.042755 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e1b2a41-1776-4907-b520-c7c941c17a54" path="/var/lib/kubelet/pods/8e1b2a41-1776-4907-b520-c7c941c17a54/volumes" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.043859 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" path="/var/lib/kubelet/pods/b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a/volumes" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.044608 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" path="/var/lib/kubelet/pods/f6d1fda6-9673-42cb-b6c4-b4375f870bcb/volumes" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.742241 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vtmgq" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852443 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b4wzn"] Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852693 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852708 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" 
containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852719 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a7d489-df52-4b28-90f9-9135da43486f" containerName="extract-content" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852726 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a7d489-df52-4b28-90f9-9135da43486f" containerName="extract-content" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852739 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852746 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852759 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerName="extract-utilities" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852766 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerName="extract-utilities" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852777 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a7d489-df52-4b28-90f9-9135da43486f" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852785 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a7d489-df52-4b28-90f9-9135da43486f" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852799 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d13816-0091-4325-88fd-acac1435d7ea" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852806 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d13816-0091-4325-88fd-acac1435d7ea" 
containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852817 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerName="extract-content" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852823 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerName="extract-content" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852834 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerName="extract-content" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852841 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerName="extract-content" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852851 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33a7d489-df52-4b28-90f9-9135da43486f" containerName="extract-utilities" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852858 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="33a7d489-df52-4b28-90f9-9135da43486f" containerName="extract-utilities" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852871 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d13816-0091-4325-88fd-acac1435d7ea" containerName="extract-content" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852878 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d13816-0091-4325-88fd-acac1435d7ea" containerName="extract-content" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852888 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1b2a41-1776-4907-b520-c7c941c17a54" containerName="marketplace-operator" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852896 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1b2a41-1776-4907-b520-c7c941c17a54" 
containerName="marketplace-operator" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852909 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerName="extract-utilities" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852916 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerName="extract-utilities" Feb 28 09:08:15 crc kubenswrapper[4996]: E0228 09:08:15.852926 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d13816-0091-4325-88fd-acac1435d7ea" containerName="extract-utilities" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.852933 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d13816-0091-4325-88fd-acac1435d7ea" containerName="extract-utilities" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.853049 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b13a5c4e-ce50-4a84-8ef1-e63d18dfd06a" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.853063 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1b2a41-1776-4907-b520-c7c941c17a54" containerName="marketplace-operator" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.853077 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d13816-0091-4325-88fd-acac1435d7ea" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.853087 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="33a7d489-df52-4b28-90f9-9135da43486f" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.853097 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6d1fda6-9673-42cb-b6c4-b4375f870bcb" containerName="registry-server" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.855657 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.857295 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhjvh\" (UniqueName: \"kubernetes.io/projected/6fb55139-32ca-412a-b14e-5a026e75bd03-kube-api-access-vhjvh\") pod \"redhat-marketplace-b4wzn\" (UID: \"6fb55139-32ca-412a-b14e-5a026e75bd03\") " pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.857491 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb55139-32ca-412a-b14e-5a026e75bd03-utilities\") pod \"redhat-marketplace-b4wzn\" (UID: \"6fb55139-32ca-412a-b14e-5a026e75bd03\") " pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.857567 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb55139-32ca-412a-b14e-5a026e75bd03-catalog-content\") pod \"redhat-marketplace-b4wzn\" (UID: \"6fb55139-32ca-412a-b14e-5a026e75bd03\") " pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.858946 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.861476 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4wzn"] Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.958239 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhjvh\" (UniqueName: \"kubernetes.io/projected/6fb55139-32ca-412a-b14e-5a026e75bd03-kube-api-access-vhjvh\") pod \"redhat-marketplace-b4wzn\" (UID: 
\"6fb55139-32ca-412a-b14e-5a026e75bd03\") " pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.958318 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb55139-32ca-412a-b14e-5a026e75bd03-utilities\") pod \"redhat-marketplace-b4wzn\" (UID: \"6fb55139-32ca-412a-b14e-5a026e75bd03\") " pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.958344 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb55139-32ca-412a-b14e-5a026e75bd03-catalog-content\") pod \"redhat-marketplace-b4wzn\" (UID: \"6fb55139-32ca-412a-b14e-5a026e75bd03\") " pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.958773 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fb55139-32ca-412a-b14e-5a026e75bd03-catalog-content\") pod \"redhat-marketplace-b4wzn\" (UID: \"6fb55139-32ca-412a-b14e-5a026e75bd03\") " pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.959001 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fb55139-32ca-412a-b14e-5a026e75bd03-utilities\") pod \"redhat-marketplace-b4wzn\" (UID: \"6fb55139-32ca-412a-b14e-5a026e75bd03\") " pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:15 crc kubenswrapper[4996]: I0228 09:08:15.981843 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhjvh\" (UniqueName: \"kubernetes.io/projected/6fb55139-32ca-412a-b14e-5a026e75bd03-kube-api-access-vhjvh\") pod \"redhat-marketplace-b4wzn\" (UID: \"6fb55139-32ca-412a-b14e-5a026e75bd03\") " 
pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.052332 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hmjwf"] Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.053967 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.056515 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.060098 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-427df\" (UniqueName: \"kubernetes.io/projected/75473fc1-d880-4706-b4ff-9c95431be795-kube-api-access-427df\") pod \"redhat-operators-hmjwf\" (UID: \"75473fc1-d880-4706-b4ff-9c95431be795\") " pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.060386 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75473fc1-d880-4706-b4ff-9c95431be795-utilities\") pod \"redhat-operators-hmjwf\" (UID: \"75473fc1-d880-4706-b4ff-9c95431be795\") " pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.060541 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75473fc1-d880-4706-b4ff-9c95431be795-catalog-content\") pod \"redhat-operators-hmjwf\" (UID: \"75473fc1-d880-4706-b4ff-9c95431be795\") " pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.070954 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmjwf"] 
Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.161443 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-427df\" (UniqueName: \"kubernetes.io/projected/75473fc1-d880-4706-b4ff-9c95431be795-kube-api-access-427df\") pod \"redhat-operators-hmjwf\" (UID: \"75473fc1-d880-4706-b4ff-9c95431be795\") " pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.161543 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75473fc1-d880-4706-b4ff-9c95431be795-utilities\") pod \"redhat-operators-hmjwf\" (UID: \"75473fc1-d880-4706-b4ff-9c95431be795\") " pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.161581 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75473fc1-d880-4706-b4ff-9c95431be795-catalog-content\") pod \"redhat-operators-hmjwf\" (UID: \"75473fc1-d880-4706-b4ff-9c95431be795\") " pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.162143 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75473fc1-d880-4706-b4ff-9c95431be795-catalog-content\") pod \"redhat-operators-hmjwf\" (UID: \"75473fc1-d880-4706-b4ff-9c95431be795\") " pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.162237 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75473fc1-d880-4706-b4ff-9c95431be795-utilities\") pod \"redhat-operators-hmjwf\" (UID: \"75473fc1-d880-4706-b4ff-9c95431be795\") " pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.175108 4996 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.189643 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-427df\" (UniqueName: \"kubernetes.io/projected/75473fc1-d880-4706-b4ff-9c95431be795-kube-api-access-427df\") pod \"redhat-operators-hmjwf\" (UID: \"75473fc1-d880-4706-b4ff-9c95431be795\") " pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.381772 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:16 crc kubenswrapper[4996]: I0228 09:08:16.435637 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b4wzn"] Feb 28 09:08:16 crc kubenswrapper[4996]: W0228 09:08:16.446995 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fb55139_32ca_412a_b14e_5a026e75bd03.slice/crio-614247ca8fca5bbd42f3222d0215c8ca4780ac06ca0754f033be4b91c5a6f439 WatchSource:0}: Error finding container 614247ca8fca5bbd42f3222d0215c8ca4780ac06ca0754f033be4b91c5a6f439: Status 404 returned error can't find the container with id 614247ca8fca5bbd42f3222d0215c8ca4780ac06ca0754f033be4b91c5a6f439 Feb 28 09:08:17 crc kubenswrapper[4996]: I0228 09:08:16.749581 4996 generic.go:334] "Generic (PLEG): container finished" podID="6fb55139-32ca-412a-b14e-5a026e75bd03" containerID="3aad11c0276310ef08e400886a49b6a7416684ae5c4286af81168faa75eaf603" exitCode=0 Feb 28 09:08:17 crc kubenswrapper[4996]: I0228 09:08:16.749672 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4wzn" event={"ID":"6fb55139-32ca-412a-b14e-5a026e75bd03","Type":"ContainerDied","Data":"3aad11c0276310ef08e400886a49b6a7416684ae5c4286af81168faa75eaf603"} Feb 
28 09:08:17 crc kubenswrapper[4996]: I0228 09:08:16.749937 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4wzn" event={"ID":"6fb55139-32ca-412a-b14e-5a026e75bd03","Type":"ContainerStarted","Data":"614247ca8fca5bbd42f3222d0215c8ca4780ac06ca0754f033be4b91c5a6f439"} Feb 28 09:08:17 crc kubenswrapper[4996]: I0228 09:08:17.268189 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmjwf"] Feb 28 09:08:17 crc kubenswrapper[4996]: W0228 09:08:17.277061 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75473fc1_d880_4706_b4ff_9c95431be795.slice/crio-cd807bb2ddc6660ae0fda19137bd31149d5f075873f152d6b3ca08edb449d45b WatchSource:0}: Error finding container cd807bb2ddc6660ae0fda19137bd31149d5f075873f152d6b3ca08edb449d45b: Status 404 returned error can't find the container with id cd807bb2ddc6660ae0fda19137bd31149d5f075873f152d6b3ca08edb449d45b Feb 28 09:08:17 crc kubenswrapper[4996]: I0228 09:08:17.755952 4996 generic.go:334] "Generic (PLEG): container finished" podID="6fb55139-32ca-412a-b14e-5a026e75bd03" containerID="ce6cef9bb0ab591977e134d065c629e87fd581d20b55c7b8c56e248f600b2e65" exitCode=0 Feb 28 09:08:17 crc kubenswrapper[4996]: I0228 09:08:17.756027 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4wzn" event={"ID":"6fb55139-32ca-412a-b14e-5a026e75bd03","Type":"ContainerDied","Data":"ce6cef9bb0ab591977e134d065c629e87fd581d20b55c7b8c56e248f600b2e65"} Feb 28 09:08:17 crc kubenswrapper[4996]: I0228 09:08:17.759862 4996 generic.go:334] "Generic (PLEG): container finished" podID="75473fc1-d880-4706-b4ff-9c95431be795" containerID="2807814fcd2a2ba088ce65fcc61fa8a87bd7eaa8e79f97980a58849a1d10bfa5" exitCode=0 Feb 28 09:08:17 crc kubenswrapper[4996]: I0228 09:08:17.759928 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hmjwf" event={"ID":"75473fc1-d880-4706-b4ff-9c95431be795","Type":"ContainerDied","Data":"2807814fcd2a2ba088ce65fcc61fa8a87bd7eaa8e79f97980a58849a1d10bfa5"} Feb 28 09:08:17 crc kubenswrapper[4996]: I0228 09:08:17.759970 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmjwf" event={"ID":"75473fc1-d880-4706-b4ff-9c95431be795","Type":"ContainerStarted","Data":"cd807bb2ddc6660ae0fda19137bd31149d5f075873f152d6b3ca08edb449d45b"} Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.261132 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lw467"] Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.262466 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.266945 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.276323 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lw467"] Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.388873 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa038159-c228-4d5d-bf86-18fa4e8c489d-catalog-content\") pod \"community-operators-lw467\" (UID: \"aa038159-c228-4d5d-bf86-18fa4e8c489d\") " pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.389276 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpqww\" (UniqueName: \"kubernetes.io/projected/aa038159-c228-4d5d-bf86-18fa4e8c489d-kube-api-access-rpqww\") pod \"community-operators-lw467\" (UID: 
\"aa038159-c228-4d5d-bf86-18fa4e8c489d\") " pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.389313 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa038159-c228-4d5d-bf86-18fa4e8c489d-utilities\") pod \"community-operators-lw467\" (UID: \"aa038159-c228-4d5d-bf86-18fa4e8c489d\") " pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.451363 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c7gl7"] Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.452823 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.455511 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.463725 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7gl7"] Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.490113 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa038159-c228-4d5d-bf86-18fa4e8c489d-utilities\") pod \"community-operators-lw467\" (UID: \"aa038159-c228-4d5d-bf86-18fa4e8c489d\") " pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.490220 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa038159-c228-4d5d-bf86-18fa4e8c489d-catalog-content\") pod \"community-operators-lw467\" (UID: \"aa038159-c228-4d5d-bf86-18fa4e8c489d\") " 
pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.490262 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpqww\" (UniqueName: \"kubernetes.io/projected/aa038159-c228-4d5d-bf86-18fa4e8c489d-kube-api-access-rpqww\") pod \"community-operators-lw467\" (UID: \"aa038159-c228-4d5d-bf86-18fa4e8c489d\") " pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.490908 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa038159-c228-4d5d-bf86-18fa4e8c489d-catalog-content\") pod \"community-operators-lw467\" (UID: \"aa038159-c228-4d5d-bf86-18fa4e8c489d\") " pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.491317 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa038159-c228-4d5d-bf86-18fa4e8c489d-utilities\") pod \"community-operators-lw467\" (UID: \"aa038159-c228-4d5d-bf86-18fa4e8c489d\") " pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.514567 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpqww\" (UniqueName: \"kubernetes.io/projected/aa038159-c228-4d5d-bf86-18fa4e8c489d-kube-api-access-rpqww\") pod \"community-operators-lw467\" (UID: \"aa038159-c228-4d5d-bf86-18fa4e8c489d\") " pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.583509 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.590831 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bffe615-107e-43bb-a1c6-abcb3684ecc5-catalog-content\") pod \"certified-operators-c7gl7\" (UID: \"7bffe615-107e-43bb-a1c6-abcb3684ecc5\") " pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.590913 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zwdk\" (UniqueName: \"kubernetes.io/projected/7bffe615-107e-43bb-a1c6-abcb3684ecc5-kube-api-access-7zwdk\") pod \"certified-operators-c7gl7\" (UID: \"7bffe615-107e-43bb-a1c6-abcb3684ecc5\") " pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.590965 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bffe615-107e-43bb-a1c6-abcb3684ecc5-utilities\") pod \"certified-operators-c7gl7\" (UID: \"7bffe615-107e-43bb-a1c6-abcb3684ecc5\") " pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.692852 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bffe615-107e-43bb-a1c6-abcb3684ecc5-utilities\") pod \"certified-operators-c7gl7\" (UID: \"7bffe615-107e-43bb-a1c6-abcb3684ecc5\") " pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.693261 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bffe615-107e-43bb-a1c6-abcb3684ecc5-catalog-content\") pod 
\"certified-operators-c7gl7\" (UID: \"7bffe615-107e-43bb-a1c6-abcb3684ecc5\") " pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.693323 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zwdk\" (UniqueName: \"kubernetes.io/projected/7bffe615-107e-43bb-a1c6-abcb3684ecc5-kube-api-access-7zwdk\") pod \"certified-operators-c7gl7\" (UID: \"7bffe615-107e-43bb-a1c6-abcb3684ecc5\") " pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.693388 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bffe615-107e-43bb-a1c6-abcb3684ecc5-utilities\") pod \"certified-operators-c7gl7\" (UID: \"7bffe615-107e-43bb-a1c6-abcb3684ecc5\") " pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.693603 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bffe615-107e-43bb-a1c6-abcb3684ecc5-catalog-content\") pod \"certified-operators-c7gl7\" (UID: \"7bffe615-107e-43bb-a1c6-abcb3684ecc5\") " pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.710566 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zwdk\" (UniqueName: \"kubernetes.io/projected/7bffe615-107e-43bb-a1c6-abcb3684ecc5-kube-api-access-7zwdk\") pod \"certified-operators-c7gl7\" (UID: \"7bffe615-107e-43bb-a1c6-abcb3684ecc5\") " pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.769281 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmjwf" 
event={"ID":"75473fc1-d880-4706-b4ff-9c95431be795","Type":"ContainerStarted","Data":"d8391091c1f43a10beffb3f6d542e6f061da68bc95f8faf1792ffb3330290027"} Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.773239 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b4wzn" event={"ID":"6fb55139-32ca-412a-b14e-5a026e75bd03","Type":"ContainerStarted","Data":"a0ad9339217d7f1defa6e6959cb9c2264802e4b8c79d84683abc69dbec8e27d9"} Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.797544 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.805509 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b4wzn" podStartSLOduration=2.406492494 podStartE2EDuration="3.805491613s" podCreationTimestamp="2026-02-28 09:08:15 +0000 UTC" firstStartedPulling="2026-02-28 09:08:16.752209192 +0000 UTC m=+460.443012003" lastFinishedPulling="2026-02-28 09:08:18.151208311 +0000 UTC m=+461.842011122" observedRunningTime="2026-02-28 09:08:18.801608318 +0000 UTC m=+462.492411139" watchObservedRunningTime="2026-02-28 09:08:18.805491613 +0000 UTC m=+462.496294424" Feb 28 09:08:18 crc kubenswrapper[4996]: I0228 09:08:18.980796 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lw467"] Feb 28 09:08:18 crc kubenswrapper[4996]: W0228 09:08:18.987270 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa038159_c228_4d5d_bf86_18fa4e8c489d.slice/crio-ae4eb4555bfd2d46f7050cbd7b4613bed3286346a29c9e559228a93ff58c2bfd WatchSource:0}: Error finding container ae4eb4555bfd2d46f7050cbd7b4613bed3286346a29c9e559228a93ff58c2bfd: Status 404 returned error can't find the container with id 
ae4eb4555bfd2d46f7050cbd7b4613bed3286346a29c9e559228a93ff58c2bfd Feb 28 09:08:19 crc kubenswrapper[4996]: I0228 09:08:19.007076 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c7gl7"] Feb 28 09:08:19 crc kubenswrapper[4996]: W0228 09:08:19.015606 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bffe615_107e_43bb_a1c6_abcb3684ecc5.slice/crio-163da85caee727ea769ed7de7937bec167b57d55c6c0655d9647a59a8a62d617 WatchSource:0}: Error finding container 163da85caee727ea769ed7de7937bec167b57d55c6c0655d9647a59a8a62d617: Status 404 returned error can't find the container with id 163da85caee727ea769ed7de7937bec167b57d55c6c0655d9647a59a8a62d617 Feb 28 09:08:19 crc kubenswrapper[4996]: I0228 09:08:19.781333 4996 generic.go:334] "Generic (PLEG): container finished" podID="75473fc1-d880-4706-b4ff-9c95431be795" containerID="d8391091c1f43a10beffb3f6d542e6f061da68bc95f8faf1792ffb3330290027" exitCode=0 Feb 28 09:08:19 crc kubenswrapper[4996]: I0228 09:08:19.781394 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmjwf" event={"ID":"75473fc1-d880-4706-b4ff-9c95431be795","Type":"ContainerDied","Data":"d8391091c1f43a10beffb3f6d542e6f061da68bc95f8faf1792ffb3330290027"} Feb 28 09:08:19 crc kubenswrapper[4996]: I0228 09:08:19.794297 4996 generic.go:334] "Generic (PLEG): container finished" podID="7bffe615-107e-43bb-a1c6-abcb3684ecc5" containerID="0a86b93c0e41d8b4c78ce358a745ceb84fd59ab61c619494e7de4f84778881b8" exitCode=0 Feb 28 09:08:19 crc kubenswrapper[4996]: I0228 09:08:19.795541 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7gl7" event={"ID":"7bffe615-107e-43bb-a1c6-abcb3684ecc5","Type":"ContainerDied","Data":"0a86b93c0e41d8b4c78ce358a745ceb84fd59ab61c619494e7de4f84778881b8"} Feb 28 09:08:19 crc kubenswrapper[4996]: I0228 09:08:19.795580 4996 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7gl7" event={"ID":"7bffe615-107e-43bb-a1c6-abcb3684ecc5","Type":"ContainerStarted","Data":"163da85caee727ea769ed7de7937bec167b57d55c6c0655d9647a59a8a62d617"} Feb 28 09:08:19 crc kubenswrapper[4996]: I0228 09:08:19.797521 4996 generic.go:334] "Generic (PLEG): container finished" podID="aa038159-c228-4d5d-bf86-18fa4e8c489d" containerID="b4074f219a35192b0829e7af4f0f872c449d40913b4302a4b73487c04d902711" exitCode=0 Feb 28 09:08:19 crc kubenswrapper[4996]: I0228 09:08:19.799717 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw467" event={"ID":"aa038159-c228-4d5d-bf86-18fa4e8c489d","Type":"ContainerDied","Data":"b4074f219a35192b0829e7af4f0f872c449d40913b4302a4b73487c04d902711"} Feb 28 09:08:19 crc kubenswrapper[4996]: I0228 09:08:19.799745 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw467" event={"ID":"aa038159-c228-4d5d-bf86-18fa4e8c489d","Type":"ContainerStarted","Data":"ae4eb4555bfd2d46f7050cbd7b4613bed3286346a29c9e559228a93ff58c2bfd"} Feb 28 09:08:20 crc kubenswrapper[4996]: I0228 09:08:20.805390 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7gl7" event={"ID":"7bffe615-107e-43bb-a1c6-abcb3684ecc5","Type":"ContainerStarted","Data":"4c8f836bc9a11a38106abba9c86b8053bb8c7c0eac2742d8ee5ed2016c2b6bd5"} Feb 28 09:08:20 crc kubenswrapper[4996]: I0228 09:08:20.807974 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw467" event={"ID":"aa038159-c228-4d5d-bf86-18fa4e8c489d","Type":"ContainerStarted","Data":"ac4465c3b612ee8f2e37597275c1f333e9b0f2c65790c5ab491f575c8fe547d3"} Feb 28 09:08:20 crc kubenswrapper[4996]: I0228 09:08:20.809900 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmjwf" 
event={"ID":"75473fc1-d880-4706-b4ff-9c95431be795","Type":"ContainerStarted","Data":"4c16fe75992f938f239dcf6917f7e5718720b991022424aba1c99db07fba2d0f"} Feb 28 09:08:20 crc kubenswrapper[4996]: I0228 09:08:20.838840 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hmjwf" podStartSLOduration=2.351867766 podStartE2EDuration="4.838823776s" podCreationTimestamp="2026-02-28 09:08:16 +0000 UTC" firstStartedPulling="2026-02-28 09:08:17.76078586 +0000 UTC m=+461.451588671" lastFinishedPulling="2026-02-28 09:08:20.24774183 +0000 UTC m=+463.938544681" observedRunningTime="2026-02-28 09:08:20.836431437 +0000 UTC m=+464.527234248" watchObservedRunningTime="2026-02-28 09:08:20.838823776 +0000 UTC m=+464.529626587" Feb 28 09:08:21 crc kubenswrapper[4996]: I0228 09:08:21.822728 4996 generic.go:334] "Generic (PLEG): container finished" podID="aa038159-c228-4d5d-bf86-18fa4e8c489d" containerID="ac4465c3b612ee8f2e37597275c1f333e9b0f2c65790c5ab491f575c8fe547d3" exitCode=0 Feb 28 09:08:21 crc kubenswrapper[4996]: I0228 09:08:21.822804 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw467" event={"ID":"aa038159-c228-4d5d-bf86-18fa4e8c489d","Type":"ContainerDied","Data":"ac4465c3b612ee8f2e37597275c1f333e9b0f2c65790c5ab491f575c8fe547d3"} Feb 28 09:08:21 crc kubenswrapper[4996]: I0228 09:08:21.825808 4996 generic.go:334] "Generic (PLEG): container finished" podID="7bffe615-107e-43bb-a1c6-abcb3684ecc5" containerID="4c8f836bc9a11a38106abba9c86b8053bb8c7c0eac2742d8ee5ed2016c2b6bd5" exitCode=0 Feb 28 09:08:21 crc kubenswrapper[4996]: I0228 09:08:21.825913 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7gl7" event={"ID":"7bffe615-107e-43bb-a1c6-abcb3684ecc5","Type":"ContainerDied","Data":"4c8f836bc9a11a38106abba9c86b8053bb8c7c0eac2742d8ee5ed2016c2b6bd5"} Feb 28 09:08:22 crc kubenswrapper[4996]: I0228 09:08:22.847738 4996 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lw467" event={"ID":"aa038159-c228-4d5d-bf86-18fa4e8c489d","Type":"ContainerStarted","Data":"590d442848f1c850cd3003833bb6b8d1bb7e2c8c087ff429f8634eeff5c7d482"} Feb 28 09:08:22 crc kubenswrapper[4996]: I0228 09:08:22.849559 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c7gl7" event={"ID":"7bffe615-107e-43bb-a1c6-abcb3684ecc5","Type":"ContainerStarted","Data":"364b2cf4749b21b19f0fd2ef4e8c872e3136b4e2afeddbac5df5bfd9fb936ff9"} Feb 28 09:08:22 crc kubenswrapper[4996]: I0228 09:08:22.887671 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lw467" podStartSLOduration=2.459419524 podStartE2EDuration="4.887655247s" podCreationTimestamp="2026-02-28 09:08:18 +0000 UTC" firstStartedPulling="2026-02-28 09:08:19.800901168 +0000 UTC m=+463.491703979" lastFinishedPulling="2026-02-28 09:08:22.229136891 +0000 UTC m=+465.919939702" observedRunningTime="2026-02-28 09:08:22.867710589 +0000 UTC m=+466.558513400" watchObservedRunningTime="2026-02-28 09:08:22.887655247 +0000 UTC m=+466.578458068" Feb 28 09:08:26 crc kubenswrapper[4996]: I0228 09:08:26.175360 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:26 crc kubenswrapper[4996]: I0228 09:08:26.175764 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:26 crc kubenswrapper[4996]: I0228 09:08:26.240235 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:26 crc kubenswrapper[4996]: I0228 09:08:26.256351 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c7gl7" 
podStartSLOduration=5.8365927939999995 podStartE2EDuration="8.256328319s" podCreationTimestamp="2026-02-28 09:08:18 +0000 UTC" firstStartedPulling="2026-02-28 09:08:19.798638223 +0000 UTC m=+463.489441024" lastFinishedPulling="2026-02-28 09:08:22.218373738 +0000 UTC m=+465.909176549" observedRunningTime="2026-02-28 09:08:22.886664962 +0000 UTC m=+466.577467773" watchObservedRunningTime="2026-02-28 09:08:26.256328319 +0000 UTC m=+469.947131150" Feb 28 09:08:26 crc kubenswrapper[4996]: I0228 09:08:26.382444 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:26 crc kubenswrapper[4996]: I0228 09:08:26.382495 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:26 crc kubenswrapper[4996]: I0228 09:08:26.932629 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b4wzn" Feb 28 09:08:27 crc kubenswrapper[4996]: I0228 09:08:27.431083 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hmjwf" podUID="75473fc1-d880-4706-b4ff-9c95431be795" containerName="registry-server" probeResult="failure" output=< Feb 28 09:08:27 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 09:08:27 crc kubenswrapper[4996]: > Feb 28 09:08:28 crc kubenswrapper[4996]: I0228 09:08:28.583952 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:28 crc kubenswrapper[4996]: I0228 09:08:28.584001 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:28 crc kubenswrapper[4996]: I0228 09:08:28.648427 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lw467" 
Feb 28 09:08:28 crc kubenswrapper[4996]: I0228 09:08:28.798628 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:28 crc kubenswrapper[4996]: I0228 09:08:28.798698 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:28 crc kubenswrapper[4996]: I0228 09:08:28.866520 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:28 crc kubenswrapper[4996]: I0228 09:08:28.951863 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lw467" Feb 28 09:08:28 crc kubenswrapper[4996]: I0228 09:08:28.954809 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c7gl7" Feb 28 09:08:36 crc kubenswrapper[4996]: I0228 09:08:36.426445 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:08:36 crc kubenswrapper[4996]: I0228 09:08:36.494997 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hmjwf" Feb 28 09:09:37 crc kubenswrapper[4996]: I0228 09:09:37.404073 4996 scope.go:117] "RemoveContainer" containerID="814d2a9b489fa830ccd85a7779e4da2a2529a9c7fa3a9f2dd03c432b1689e652" Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.155356 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537830-lfn9x"] Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.158068 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537830-lfn9x" Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.163855 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.167023 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.167251 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.170105 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxnsr\" (UniqueName: \"kubernetes.io/projected/0c9ffe54-8ddf-4937-af96-bf07551c9890-kube-api-access-nxnsr\") pod \"auto-csr-approver-29537830-lfn9x\" (UID: \"0c9ffe54-8ddf-4937-af96-bf07551c9890\") " pod="openshift-infra/auto-csr-approver-29537830-lfn9x" Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.170320 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537830-lfn9x"] Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.271865 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxnsr\" (UniqueName: \"kubernetes.io/projected/0c9ffe54-8ddf-4937-af96-bf07551c9890-kube-api-access-nxnsr\") pod \"auto-csr-approver-29537830-lfn9x\" (UID: \"0c9ffe54-8ddf-4937-af96-bf07551c9890\") " pod="openshift-infra/auto-csr-approver-29537830-lfn9x" Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.290499 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxnsr\" (UniqueName: \"kubernetes.io/projected/0c9ffe54-8ddf-4937-af96-bf07551c9890-kube-api-access-nxnsr\") pod \"auto-csr-approver-29537830-lfn9x\" (UID: \"0c9ffe54-8ddf-4937-af96-bf07551c9890\") " 
pod="openshift-infra/auto-csr-approver-29537830-lfn9x" Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.484708 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537830-lfn9x" Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.703434 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537830-lfn9x"] Feb 28 09:10:00 crc kubenswrapper[4996]: W0228 09:10:00.712253 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c9ffe54_8ddf_4937_af96_bf07551c9890.slice/crio-afc865a7476500d93d6d095d1f8876a87fb366a01a3b2fa994bb0506749fdbb9 WatchSource:0}: Error finding container afc865a7476500d93d6d095d1f8876a87fb366a01a3b2fa994bb0506749fdbb9: Status 404 returned error can't find the container with id afc865a7476500d93d6d095d1f8876a87fb366a01a3b2fa994bb0506749fdbb9 Feb 28 09:10:00 crc kubenswrapper[4996]: I0228 09:10:00.717084 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:10:01 crc kubenswrapper[4996]: I0228 09:10:01.521822 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537830-lfn9x" event={"ID":"0c9ffe54-8ddf-4937-af96-bf07551c9890","Type":"ContainerStarted","Data":"afc865a7476500d93d6d095d1f8876a87fb366a01a3b2fa994bb0506749fdbb9"} Feb 28 09:10:02 crc kubenswrapper[4996]: I0228 09:10:02.530090 4996 generic.go:334] "Generic (PLEG): container finished" podID="0c9ffe54-8ddf-4937-af96-bf07551c9890" containerID="0c1dac682c65079cbfba1379251d72657f952987e21c49c2128b43019d5efef6" exitCode=0 Feb 28 09:10:02 crc kubenswrapper[4996]: I0228 09:10:02.530152 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537830-lfn9x" 
event={"ID":"0c9ffe54-8ddf-4937-af96-bf07551c9890","Type":"ContainerDied","Data":"0c1dac682c65079cbfba1379251d72657f952987e21c49c2128b43019d5efef6"} Feb 28 09:10:03 crc kubenswrapper[4996]: I0228 09:10:03.787615 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537830-lfn9x" Feb 28 09:10:03 crc kubenswrapper[4996]: I0228 09:10:03.811087 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxnsr\" (UniqueName: \"kubernetes.io/projected/0c9ffe54-8ddf-4937-af96-bf07551c9890-kube-api-access-nxnsr\") pod \"0c9ffe54-8ddf-4937-af96-bf07551c9890\" (UID: \"0c9ffe54-8ddf-4937-af96-bf07551c9890\") " Feb 28 09:10:03 crc kubenswrapper[4996]: I0228 09:10:03.819463 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c9ffe54-8ddf-4937-af96-bf07551c9890-kube-api-access-nxnsr" (OuterVolumeSpecName: "kube-api-access-nxnsr") pod "0c9ffe54-8ddf-4937-af96-bf07551c9890" (UID: "0c9ffe54-8ddf-4937-af96-bf07551c9890"). InnerVolumeSpecName "kube-api-access-nxnsr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:10:03 crc kubenswrapper[4996]: I0228 09:10:03.912523 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxnsr\" (UniqueName: \"kubernetes.io/projected/0c9ffe54-8ddf-4937-af96-bf07551c9890-kube-api-access-nxnsr\") on node \"crc\" DevicePath \"\"" Feb 28 09:10:04 crc kubenswrapper[4996]: I0228 09:10:04.545994 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537830-lfn9x" event={"ID":"0c9ffe54-8ddf-4937-af96-bf07551c9890","Type":"ContainerDied","Data":"afc865a7476500d93d6d095d1f8876a87fb366a01a3b2fa994bb0506749fdbb9"} Feb 28 09:10:04 crc kubenswrapper[4996]: I0228 09:10:04.546458 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afc865a7476500d93d6d095d1f8876a87fb366a01a3b2fa994bb0506749fdbb9" Feb 28 09:10:04 crc kubenswrapper[4996]: I0228 09:10:04.546111 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537830-lfn9x" Feb 28 09:10:04 crc kubenswrapper[4996]: I0228 09:10:04.867482 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537824-mtpqq"] Feb 28 09:10:04 crc kubenswrapper[4996]: I0228 09:10:04.877056 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537824-mtpqq"] Feb 28 09:10:05 crc kubenswrapper[4996]: I0228 09:10:05.049172 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4222bc36-fe78-4dba-a558-af3b4fb70d56" path="/var/lib/kubelet/pods/4222bc36-fe78-4dba-a558-af3b4fb70d56/volumes" Feb 28 09:10:12 crc kubenswrapper[4996]: I0228 09:10:12.249059 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 09:10:12 crc kubenswrapper[4996]: I0228 09:10:12.249715 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.144198 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4w2v7"] Feb 28 09:10:17 crc kubenswrapper[4996]: E0228 09:10:17.146355 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c9ffe54-8ddf-4937-af96-bf07551c9890" containerName="oc" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.146386 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c9ffe54-8ddf-4937-af96-bf07551c9890" containerName="oc" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.146670 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c9ffe54-8ddf-4937-af96-bf07551c9890" containerName="oc" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.147284 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.163599 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4w2v7"] Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.326136 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b578bee2-0bab-452f-a4d5-af90a24fd89c-registry-tls\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.326194 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b578bee2-0bab-452f-a4d5-af90a24fd89c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.326224 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b578bee2-0bab-452f-a4d5-af90a24fd89c-trusted-ca\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.326263 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b578bee2-0bab-452f-a4d5-af90a24fd89c-registry-certificates\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 
28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.326288 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b578bee2-0bab-452f-a4d5-af90a24fd89c-bound-sa-token\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.326339 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls2m5\" (UniqueName: \"kubernetes.io/projected/b578bee2-0bab-452f-a4d5-af90a24fd89c-kube-api-access-ls2m5\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.326394 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.326412 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b578bee2-0bab-452f-a4d5-af90a24fd89c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.360440 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.427452 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b578bee2-0bab-452f-a4d5-af90a24fd89c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.427547 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b578bee2-0bab-452f-a4d5-af90a24fd89c-registry-tls\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.427603 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b578bee2-0bab-452f-a4d5-af90a24fd89c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.427659 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b578bee2-0bab-452f-a4d5-af90a24fd89c-trusted-ca\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.427694 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b578bee2-0bab-452f-a4d5-af90a24fd89c-registry-certificates\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.427747 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b578bee2-0bab-452f-a4d5-af90a24fd89c-bound-sa-token\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.427789 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls2m5\" (UniqueName: \"kubernetes.io/projected/b578bee2-0bab-452f-a4d5-af90a24fd89c-kube-api-access-ls2m5\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.428747 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b578bee2-0bab-452f-a4d5-af90a24fd89c-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.429863 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b578bee2-0bab-452f-a4d5-af90a24fd89c-trusted-ca\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc 
kubenswrapper[4996]: I0228 09:10:17.430040 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b578bee2-0bab-452f-a4d5-af90a24fd89c-registry-certificates\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.434076 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b578bee2-0bab-452f-a4d5-af90a24fd89c-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.436560 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b578bee2-0bab-452f-a4d5-af90a24fd89c-registry-tls\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.447818 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b578bee2-0bab-452f-a4d5-af90a24fd89c-bound-sa-token\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.460871 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls2m5\" (UniqueName: \"kubernetes.io/projected/b578bee2-0bab-452f-a4d5-af90a24fd89c-kube-api-access-ls2m5\") pod \"image-registry-66df7c8f76-4w2v7\" (UID: \"b578bee2-0bab-452f-a4d5-af90a24fd89c\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.468053 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:17 crc kubenswrapper[4996]: I0228 09:10:17.710056 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4w2v7"] Feb 28 09:10:18 crc kubenswrapper[4996]: I0228 09:10:18.647301 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" event={"ID":"b578bee2-0bab-452f-a4d5-af90a24fd89c","Type":"ContainerStarted","Data":"54a5975404ec5160ceec8b9468889df790c9b47dcb20c2c734409b3f28821777"} Feb 28 09:10:18 crc kubenswrapper[4996]: I0228 09:10:18.649717 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" event={"ID":"b578bee2-0bab-452f-a4d5-af90a24fd89c","Type":"ContainerStarted","Data":"7a751b69a822325166036f88ba1a8040c3badca55b40ea89dba954d8c02a3262"} Feb 28 09:10:18 crc kubenswrapper[4996]: I0228 09:10:18.649993 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:18 crc kubenswrapper[4996]: I0228 09:10:18.673813 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" podStartSLOduration=1.673794895 podStartE2EDuration="1.673794895s" podCreationTimestamp="2026-02-28 09:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:10:18.6707848 +0000 UTC m=+582.361587621" watchObservedRunningTime="2026-02-28 09:10:18.673794895 +0000 UTC m=+582.364597716" Feb 28 09:10:37 crc kubenswrapper[4996]: I0228 09:10:37.454479 4996 scope.go:117] "RemoveContainer" 
containerID="a733d508e45d10950be3b8fa0e70d6e61c8806293fadd4e148b16f55b31fd497" Feb 28 09:10:37 crc kubenswrapper[4996]: I0228 09:10:37.486172 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4w2v7" Feb 28 09:10:37 crc kubenswrapper[4996]: I0228 09:10:37.568875 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k6wjx"] Feb 28 09:10:42 crc kubenswrapper[4996]: I0228 09:10:42.249304 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:10:42 crc kubenswrapper[4996]: I0228 09:10:42.249966 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:11:02 crc kubenswrapper[4996]: I0228 09:11:02.613685 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" podUID="a17bc456-8bc4-464f-a3d4-3d9ac9985870" containerName="registry" containerID="cri-o://4d9af0f0514566a0763458868b6b1bab56e18c147abefee67dbe88ac34190053" gracePeriod=30 Feb 28 09:11:02 crc kubenswrapper[4996]: I0228 09:11:02.949210 4996 generic.go:334] "Generic (PLEG): container finished" podID="a17bc456-8bc4-464f-a3d4-3d9ac9985870" containerID="4d9af0f0514566a0763458868b6b1bab56e18c147abefee67dbe88ac34190053" exitCode=0 Feb 28 09:11:02 crc kubenswrapper[4996]: I0228 09:11:02.949253 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" event={"ID":"a17bc456-8bc4-464f-a3d4-3d9ac9985870","Type":"ContainerDied","Data":"4d9af0f0514566a0763458868b6b1bab56e18c147abefee67dbe88ac34190053"} Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.009641 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.135698 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-bound-sa-token\") pod \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.136395 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-tls\") pod \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.136810 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.137095 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-trusted-ca\") pod \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.137306 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a17bc456-8bc4-464f-a3d4-3d9ac9985870-ca-trust-extracted\") pod \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.137534 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-certificates\") pod \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.137819 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a17bc456-8bc4-464f-a3d4-3d9ac9985870" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.139597 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a17bc456-8bc4-464f-a3d4-3d9ac9985870" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.139826 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a17bc456-8bc4-464f-a3d4-3d9ac9985870-installation-pull-secrets\") pod \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.140791 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s8rt\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-kube-api-access-7s8rt\") pod \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\" (UID: \"a17bc456-8bc4-464f-a3d4-3d9ac9985870\") " Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.141835 4996 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.141880 4996 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a17bc456-8bc4-464f-a3d4-3d9ac9985870-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.144534 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a17bc456-8bc4-464f-a3d4-3d9ac9985870-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a17bc456-8bc4-464f-a3d4-3d9ac9985870" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.144599 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a17bc456-8bc4-464f-a3d4-3d9ac9985870" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.145232 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a17bc456-8bc4-464f-a3d4-3d9ac9985870" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.155037 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a17bc456-8bc4-464f-a3d4-3d9ac9985870-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a17bc456-8bc4-464f-a3d4-3d9ac9985870" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.155791 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-kube-api-access-7s8rt" (OuterVolumeSpecName: "kube-api-access-7s8rt") pod "a17bc456-8bc4-464f-a3d4-3d9ac9985870" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870"). InnerVolumeSpecName "kube-api-access-7s8rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.156064 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a17bc456-8bc4-464f-a3d4-3d9ac9985870" (UID: "a17bc456-8bc4-464f-a3d4-3d9ac9985870"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.242573 4996 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.242612 4996 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a17bc456-8bc4-464f-a3d4-3d9ac9985870-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.242637 4996 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a17bc456-8bc4-464f-a3d4-3d9ac9985870-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.242661 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s8rt\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-kube-api-access-7s8rt\") on node \"crc\" DevicePath \"\"" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.242677 4996 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a17bc456-8bc4-464f-a3d4-3d9ac9985870-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.960427 4996 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" event={"ID":"a17bc456-8bc4-464f-a3d4-3d9ac9985870","Type":"ContainerDied","Data":"5b978bf961f563345b577001256c8c05b9db261b01669fc9eea341a60e55f68f"} Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.960507 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-k6wjx" Feb 28 09:11:03 crc kubenswrapper[4996]: I0228 09:11:03.962285 4996 scope.go:117] "RemoveContainer" containerID="4d9af0f0514566a0763458868b6b1bab56e18c147abefee67dbe88ac34190053" Feb 28 09:11:04 crc kubenswrapper[4996]: I0228 09:11:04.019130 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k6wjx"] Feb 28 09:11:04 crc kubenswrapper[4996]: I0228 09:11:04.026629 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-k6wjx"] Feb 28 09:11:05 crc kubenswrapper[4996]: I0228 09:11:05.043223 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a17bc456-8bc4-464f-a3d4-3d9ac9985870" path="/var/lib/kubelet/pods/a17bc456-8bc4-464f-a3d4-3d9ac9985870/volumes" Feb 28 09:11:12 crc kubenswrapper[4996]: I0228 09:11:12.249621 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:11:12 crc kubenswrapper[4996]: I0228 09:11:12.250086 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 
09:11:12 crc kubenswrapper[4996]: I0228 09:11:12.250173 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:11:12 crc kubenswrapper[4996]: I0228 09:11:12.250980 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3b6da15faf8b8661d31491d68582f55b569ea0ca1baae1efe37fa713b132293"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:11:12 crc kubenswrapper[4996]: I0228 09:11:12.251120 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://e3b6da15faf8b8661d31491d68582f55b569ea0ca1baae1efe37fa713b132293" gracePeriod=600 Feb 28 09:11:13 crc kubenswrapper[4996]: I0228 09:11:13.022972 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="e3b6da15faf8b8661d31491d68582f55b569ea0ca1baae1efe37fa713b132293" exitCode=0 Feb 28 09:11:13 crc kubenswrapper[4996]: I0228 09:11:13.023036 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"e3b6da15faf8b8661d31491d68582f55b569ea0ca1baae1efe37fa713b132293"} Feb 28 09:11:13 crc kubenswrapper[4996]: I0228 09:11:13.023377 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"15be7a1e2ea878c9bfdd0618662f5c8e4a5e11c78306b49a8bacc5ba71758e6f"} Feb 28 09:11:13 crc kubenswrapper[4996]: 
I0228 09:11:13.023401 4996 scope.go:117] "RemoveContainer" containerID="346752807d9a3626d399fabf78210641bf6ab96ef710b50bafdc570ef4223171" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.136305 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537832-gmxhr"] Feb 28 09:12:00 crc kubenswrapper[4996]: E0228 09:12:00.137130 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a17bc456-8bc4-464f-a3d4-3d9ac9985870" containerName="registry" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.137152 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a17bc456-8bc4-464f-a3d4-3d9ac9985870" containerName="registry" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.137367 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a17bc456-8bc4-464f-a3d4-3d9ac9985870" containerName="registry" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.137815 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537832-gmxhr" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.141351 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.142869 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.143134 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.144224 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537832-gmxhr"] Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.247572 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf99m\" (UniqueName: \"kubernetes.io/projected/5deff085-2423-4d2c-aac3-9e3ce4247b77-kube-api-access-jf99m\") pod \"auto-csr-approver-29537832-gmxhr\" (UID: \"5deff085-2423-4d2c-aac3-9e3ce4247b77\") " pod="openshift-infra/auto-csr-approver-29537832-gmxhr" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.349028 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf99m\" (UniqueName: \"kubernetes.io/projected/5deff085-2423-4d2c-aac3-9e3ce4247b77-kube-api-access-jf99m\") pod \"auto-csr-approver-29537832-gmxhr\" (UID: \"5deff085-2423-4d2c-aac3-9e3ce4247b77\") " pod="openshift-infra/auto-csr-approver-29537832-gmxhr" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.382069 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf99m\" (UniqueName: \"kubernetes.io/projected/5deff085-2423-4d2c-aac3-9e3ce4247b77-kube-api-access-jf99m\") pod \"auto-csr-approver-29537832-gmxhr\" (UID: \"5deff085-2423-4d2c-aac3-9e3ce4247b77\") " 
pod="openshift-infra/auto-csr-approver-29537832-gmxhr" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.462412 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537832-gmxhr" Feb 28 09:12:00 crc kubenswrapper[4996]: I0228 09:12:00.901760 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537832-gmxhr"] Feb 28 09:12:01 crc kubenswrapper[4996]: I0228 09:12:01.361056 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537832-gmxhr" event={"ID":"5deff085-2423-4d2c-aac3-9e3ce4247b77","Type":"ContainerStarted","Data":"6897b78fc0ba41a9ba4aa29a5ed91f0dc2a8b304f93995b8fa17fee23a89e7e0"} Feb 28 09:12:02 crc kubenswrapper[4996]: I0228 09:12:02.369957 4996 generic.go:334] "Generic (PLEG): container finished" podID="5deff085-2423-4d2c-aac3-9e3ce4247b77" containerID="2a704e235144c7136a09dc9ab820ed9f441017f15606456eef364755dedc81b0" exitCode=0 Feb 28 09:12:02 crc kubenswrapper[4996]: I0228 09:12:02.370039 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537832-gmxhr" event={"ID":"5deff085-2423-4d2c-aac3-9e3ce4247b77","Type":"ContainerDied","Data":"2a704e235144c7136a09dc9ab820ed9f441017f15606456eef364755dedc81b0"} Feb 28 09:12:03 crc kubenswrapper[4996]: I0228 09:12:03.694890 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537832-gmxhr" Feb 28 09:12:03 crc kubenswrapper[4996]: I0228 09:12:03.706597 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jf99m\" (UniqueName: \"kubernetes.io/projected/5deff085-2423-4d2c-aac3-9e3ce4247b77-kube-api-access-jf99m\") pod \"5deff085-2423-4d2c-aac3-9e3ce4247b77\" (UID: \"5deff085-2423-4d2c-aac3-9e3ce4247b77\") " Feb 28 09:12:03 crc kubenswrapper[4996]: I0228 09:12:03.717842 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5deff085-2423-4d2c-aac3-9e3ce4247b77-kube-api-access-jf99m" (OuterVolumeSpecName: "kube-api-access-jf99m") pod "5deff085-2423-4d2c-aac3-9e3ce4247b77" (UID: "5deff085-2423-4d2c-aac3-9e3ce4247b77"). InnerVolumeSpecName "kube-api-access-jf99m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:12:03 crc kubenswrapper[4996]: I0228 09:12:03.807488 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jf99m\" (UniqueName: \"kubernetes.io/projected/5deff085-2423-4d2c-aac3-9e3ce4247b77-kube-api-access-jf99m\") on node \"crc\" DevicePath \"\"" Feb 28 09:12:04 crc kubenswrapper[4996]: I0228 09:12:04.386043 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537832-gmxhr" event={"ID":"5deff085-2423-4d2c-aac3-9e3ce4247b77","Type":"ContainerDied","Data":"6897b78fc0ba41a9ba4aa29a5ed91f0dc2a8b304f93995b8fa17fee23a89e7e0"} Feb 28 09:12:04 crc kubenswrapper[4996]: I0228 09:12:04.386337 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6897b78fc0ba41a9ba4aa29a5ed91f0dc2a8b304f93995b8fa17fee23a89e7e0" Feb 28 09:12:04 crc kubenswrapper[4996]: I0228 09:12:04.386129 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537832-gmxhr" Feb 28 09:12:04 crc kubenswrapper[4996]: I0228 09:12:04.770683 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537826-bhwkq"] Feb 28 09:12:04 crc kubenswrapper[4996]: I0228 09:12:04.775889 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537826-bhwkq"] Feb 28 09:12:05 crc kubenswrapper[4996]: I0228 09:12:05.045993 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="540fba90-cd05-4d58-b37b-25c4dedaf95e" path="/var/lib/kubelet/pods/540fba90-cd05-4d58-b37b-25c4dedaf95e/volumes" Feb 28 09:12:37 crc kubenswrapper[4996]: I0228 09:12:37.605034 4996 scope.go:117] "RemoveContainer" containerID="ee5ee9fa21014fca1aa2700ae0ff95a7c43bb7b934751dec30d08f8e1013817f" Feb 28 09:13:12 crc kubenswrapper[4996]: I0228 09:13:12.248696 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:13:12 crc kubenswrapper[4996]: I0228 09:13:12.249340 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.008302 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g"] Feb 28 09:13:26 crc kubenswrapper[4996]: E0228 09:13:26.009075 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5deff085-2423-4d2c-aac3-9e3ce4247b77" containerName="oc" Feb 28 09:13:26 crc 
kubenswrapper[4996]: I0228 09:13:26.009089 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5deff085-2423-4d2c-aac3-9e3ce4247b77" containerName="oc" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.009169 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5deff085-2423-4d2c-aac3-9e3ce4247b77" containerName="oc" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.009521 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.012291 4996 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-cdwzw" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.012455 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.012465 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.037379 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-z92ks"] Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.038175 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-z92ks" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.040273 4996 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6njgm" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.048810 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g"] Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.058388 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-z92ks"] Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.063788 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wl22t"] Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.064565 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.067897 4996 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-t7bjl" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.074901 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rfk5\" (UniqueName: \"kubernetes.io/projected/e5269cb9-2bff-4476-92a8-fc85304fe923-kube-api-access-7rfk5\") pod \"cert-manager-858654f9db-z92ks\" (UID: \"e5269cb9-2bff-4476-92a8-fc85304fe923\") " pod="cert-manager/cert-manager-858654f9db-z92ks" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.075048 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pp98\" (UniqueName: \"kubernetes.io/projected/6305a0f6-5022-49e7-b7a3-e41862e0bfbc-kube-api-access-6pp98\") pod \"cert-manager-cainjector-cf98fcc89-fmb5g\" (UID: \"6305a0f6-5022-49e7-b7a3-e41862e0bfbc\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.080965 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wl22t"] Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.176456 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pp98\" (UniqueName: \"kubernetes.io/projected/6305a0f6-5022-49e7-b7a3-e41862e0bfbc-kube-api-access-6pp98\") pod \"cert-manager-cainjector-cf98fcc89-fmb5g\" (UID: \"6305a0f6-5022-49e7-b7a3-e41862e0bfbc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.176547 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dc9l\" (UniqueName: \"kubernetes.io/projected/7db94663-acd6-4e4c-a203-2cec2afad8da-kube-api-access-6dc9l\") pod \"cert-manager-webhook-687f57d79b-wl22t\" (UID: \"7db94663-acd6-4e4c-a203-2cec2afad8da\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.176585 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rfk5\" (UniqueName: \"kubernetes.io/projected/e5269cb9-2bff-4476-92a8-fc85304fe923-kube-api-access-7rfk5\") pod \"cert-manager-858654f9db-z92ks\" (UID: \"e5269cb9-2bff-4476-92a8-fc85304fe923\") " pod="cert-manager/cert-manager-858654f9db-z92ks" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.203000 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pp98\" (UniqueName: \"kubernetes.io/projected/6305a0f6-5022-49e7-b7a3-e41862e0bfbc-kube-api-access-6pp98\") pod \"cert-manager-cainjector-cf98fcc89-fmb5g\" (UID: \"6305a0f6-5022-49e7-b7a3-e41862e0bfbc\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.206672 
4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rfk5\" (UniqueName: \"kubernetes.io/projected/e5269cb9-2bff-4476-92a8-fc85304fe923-kube-api-access-7rfk5\") pod \"cert-manager-858654f9db-z92ks\" (UID: \"e5269cb9-2bff-4476-92a8-fc85304fe923\") " pod="cert-manager/cert-manager-858654f9db-z92ks" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.278077 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dc9l\" (UniqueName: \"kubernetes.io/projected/7db94663-acd6-4e4c-a203-2cec2afad8da-kube-api-access-6dc9l\") pod \"cert-manager-webhook-687f57d79b-wl22t\" (UID: \"7db94663-acd6-4e4c-a203-2cec2afad8da\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.295728 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dc9l\" (UniqueName: \"kubernetes.io/projected/7db94663-acd6-4e4c-a203-2cec2afad8da-kube-api-access-6dc9l\") pod \"cert-manager-webhook-687f57d79b-wl22t\" (UID: \"7db94663-acd6-4e4c-a203-2cec2afad8da\") " pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.328150 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.359678 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-z92ks" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.380924 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.642933 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-wl22t"] Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.762987 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g"] Feb 28 09:13:26 crc kubenswrapper[4996]: W0228 09:13:26.767356 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6305a0f6_5022_49e7_b7a3_e41862e0bfbc.slice/crio-7a4c2a9e44ae9effc4fe84761ec7fe38f4bbbdd599c11a94cf0c28eaa5466ab0 WatchSource:0}: Error finding container 7a4c2a9e44ae9effc4fe84761ec7fe38f4bbbdd599c11a94cf0c28eaa5466ab0: Status 404 returned error can't find the container with id 7a4c2a9e44ae9effc4fe84761ec7fe38f4bbbdd599c11a94cf0c28eaa5466ab0 Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.826277 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-z92ks"] Feb 28 09:13:26 crc kubenswrapper[4996]: W0228 09:13:26.828475 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5269cb9_2bff_4476_92a8_fc85304fe923.slice/crio-3b5e28b2cd3c3183068e6f1ab422defe14026fff52f3d2356baef840d5a46bf8 WatchSource:0}: Error finding container 3b5e28b2cd3c3183068e6f1ab422defe14026fff52f3d2356baef840d5a46bf8: Status 404 returned error can't find the container with id 3b5e28b2cd3c3183068e6f1ab422defe14026fff52f3d2356baef840d5a46bf8 Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.953286 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g" 
event={"ID":"6305a0f6-5022-49e7-b7a3-e41862e0bfbc","Type":"ContainerStarted","Data":"7a4c2a9e44ae9effc4fe84761ec7fe38f4bbbdd599c11a94cf0c28eaa5466ab0"} Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.954787 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" event={"ID":"7db94663-acd6-4e4c-a203-2cec2afad8da","Type":"ContainerStarted","Data":"cf14c05c81d6665068e55f36cc25d23c4db7d31a848a7034d205fd2addbadac1"} Feb 28 09:13:26 crc kubenswrapper[4996]: I0228 09:13:26.956523 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-z92ks" event={"ID":"e5269cb9-2bff-4476-92a8-fc85304fe923","Type":"ContainerStarted","Data":"3b5e28b2cd3c3183068e6f1ab422defe14026fff52f3d2356baef840d5a46bf8"} Feb 28 09:13:29 crc kubenswrapper[4996]: I0228 09:13:29.974292 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g" event={"ID":"6305a0f6-5022-49e7-b7a3-e41862e0bfbc","Type":"ContainerStarted","Data":"41acfb757a7f5221d0f75cbc7ab59dd74373b78759b54c4c16919f7acef2e0c1"} Feb 28 09:13:29 crc kubenswrapper[4996]: I0228 09:13:29.987679 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" event={"ID":"7db94663-acd6-4e4c-a203-2cec2afad8da","Type":"ContainerStarted","Data":"7a7ef98b7aa26ed8e042c4065693a165cd11df55ee9a17af79a50cbeee01a0b5"} Feb 28 09:13:29 crc kubenswrapper[4996]: I0228 09:13:29.989411 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" Feb 28 09:13:30 crc kubenswrapper[4996]: I0228 09:13:30.009389 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fmb5g" podStartSLOduration=2.504228223 podStartE2EDuration="5.009343211s" podCreationTimestamp="2026-02-28 09:13:25 +0000 UTC" firstStartedPulling="2026-02-28 
09:13:26.771232471 +0000 UTC m=+770.462035302" lastFinishedPulling="2026-02-28 09:13:29.276347459 +0000 UTC m=+772.967150290" observedRunningTime="2026-02-28 09:13:29.988452035 +0000 UTC m=+773.679254856" watchObservedRunningTime="2026-02-28 09:13:30.009343211 +0000 UTC m=+773.700146072" Feb 28 09:13:30 crc kubenswrapper[4996]: I0228 09:13:30.009865 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" podStartSLOduration=1.443444604 podStartE2EDuration="4.009853994s" podCreationTimestamp="2026-02-28 09:13:26 +0000 UTC" firstStartedPulling="2026-02-28 09:13:26.65043445 +0000 UTC m=+770.341237261" lastFinishedPulling="2026-02-28 09:13:29.21684382 +0000 UTC m=+772.907646651" observedRunningTime="2026-02-28 09:13:30.007623809 +0000 UTC m=+773.698426650" watchObservedRunningTime="2026-02-28 09:13:30.009853994 +0000 UTC m=+773.700656835" Feb 28 09:13:32 crc kubenswrapper[4996]: I0228 09:13:32.005766 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-z92ks" event={"ID":"e5269cb9-2bff-4476-92a8-fc85304fe923","Type":"ContainerStarted","Data":"4efa6c3188c2849b58081a3f3e924ef374d6d4815c9569c37fb22e5504aaf472"} Feb 28 09:13:32 crc kubenswrapper[4996]: I0228 09:13:32.031179 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-z92ks" podStartSLOduration=1.861674974 podStartE2EDuration="6.031150953s" podCreationTimestamp="2026-02-28 09:13:26 +0000 UTC" firstStartedPulling="2026-02-28 09:13:26.832485211 +0000 UTC m=+770.523288032" lastFinishedPulling="2026-02-28 09:13:31.00196119 +0000 UTC m=+774.692764011" observedRunningTime="2026-02-28 09:13:32.027150697 +0000 UTC m=+775.717953508" watchObservedRunningTime="2026-02-28 09:13:32.031150953 +0000 UTC m=+775.721953794" Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.162854 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-hjj82"] Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.163472 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovn-controller" containerID="cri-o://740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20" gracePeriod=30 Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.163790 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="sbdb" containerID="cri-o://24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4" gracePeriod=30 Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.163827 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="nbdb" containerID="cri-o://f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71" gracePeriod=30 Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.163855 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="northd" containerID="cri-o://40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa" gracePeriod=30 Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.163880 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842" gracePeriod=30 Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.163912 4996 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kube-rbac-proxy-node" containerID="cri-o://3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2" gracePeriod=30 Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.163969 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovn-acl-logging" containerID="cri-o://1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef" gracePeriod=30 Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.221129 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" containerID="cri-o://eeb43d191b894dbe97a619103a6478b4a588c7b898dbf3d0e9cc31d76a9b6291" gracePeriod=30 Feb 28 09:13:36 crc kubenswrapper[4996]: I0228 09:13:36.384994 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-wl22t" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.048127 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovnkube-controller/2.log" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.051199 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovn-acl-logging/0.log" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.051913 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovn-controller/0.log" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052565 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="eeb43d191b894dbe97a619103a6478b4a588c7b898dbf3d0e9cc31d76a9b6291" exitCode=0 Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052625 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4" exitCode=0 Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052588 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"eeb43d191b894dbe97a619103a6478b4a588c7b898dbf3d0e9cc31d76a9b6291"} Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052737 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4"} Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052775 4996 scope.go:117] "RemoveContainer" containerID="f3ffc59e26d85a0da266bef9af530533b4459d5ba0b795fc3890c710690b4cf8" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052785 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71"} Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052649 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71" exitCode=0 Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052839 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" 
containerID="40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa" exitCode=0 Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052871 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842" exitCode=0 Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052891 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2" exitCode=0 Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052917 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef" exitCode=143 Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052936 4996 generic.go:334] "Generic (PLEG): container finished" podID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerID="740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20" exitCode=143 Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052954 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa"} Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.052988 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842"} Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.053016 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" 
event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2"} Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.053030 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef"} Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.053043 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20"} Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.055763 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-snglm_6ed5a0c7-4cae-4140-be04-b7a0f3899920/kube-multus/1.log" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.056281 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-snglm_6ed5a0c7-4cae-4140-be04-b7a0f3899920/kube-multus/0.log" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.056320 4996 generic.go:334] "Generic (PLEG): container finished" podID="6ed5a0c7-4cae-4140-be04-b7a0f3899920" containerID="f9163596ea18ff2974cb93f682ea825211ebda9e39a3d64a116037ee105d6806" exitCode=2 Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.056346 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-snglm" event={"ID":"6ed5a0c7-4cae-4140-be04-b7a0f3899920","Type":"ContainerDied","Data":"f9163596ea18ff2974cb93f682ea825211ebda9e39a3d64a116037ee105d6806"} Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.056835 4996 scope.go:117] "RemoveContainer" containerID="f9163596ea18ff2974cb93f682ea825211ebda9e39a3d64a116037ee105d6806" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 
09:13:37.072520 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovn-acl-logging/0.log" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.073089 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hjj82_6730cd9d-a0be-4a00-966e-f936e7b888b6/ovn-controller/0.log" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.073572 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.078532 4996 scope.go:117] "RemoveContainer" containerID="18afe1ff8134f49f6dc6239df8149b055977d5c23f80500084191bc68b7ab21a" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169353 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d5qqc"] Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169624 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kubecfg-setup" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169639 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kubecfg-setup" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169651 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169661 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169674 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovn-acl-logging" Feb 28 09:13:37 crc kubenswrapper[4996]: 
I0228 09:13:37.169683 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovn-acl-logging" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169695 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169702 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169710 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="sbdb" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169717 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="sbdb" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169725 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kube-rbac-proxy-node" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169733 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kube-rbac-proxy-node" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169748 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169756 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169768 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovn-controller" Feb 28 09:13:37 crc 
kubenswrapper[4996]: I0228 09:13:37.169776 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovn-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169789 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169797 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169808 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169816 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169828 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="northd" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169836 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="northd" Feb 28 09:13:37 crc kubenswrapper[4996]: E0228 09:13:37.169844 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="nbdb" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169851 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="nbdb" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169955 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169967 
4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovn-acl-logging" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169979 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169990 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="nbdb" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.169998 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.170029 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="kube-rbac-proxy-node" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.170040 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovn-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.170048 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="sbdb" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.170059 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="northd" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.170281 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.170294 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" containerName="ovnkube-controller" Feb 28 09:13:37 crc 
kubenswrapper[4996]: I0228 09:13:37.173783 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239205 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-ovn\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239470 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-script-lib\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239486 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-var-lib-openvswitch\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239507 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-env-overrides\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239528 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-slash\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239541 
4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-node-log\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239556 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-netd\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239578 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovn-node-metrics-cert\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239608 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-systemd\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239627 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-systemd-units\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239640 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239656 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-log-socket\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239675 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2k5h\" (UniqueName: \"kubernetes.io/projected/6730cd9d-a0be-4a00-966e-f936e7b888b6-kube-api-access-s2k5h\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239691 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-openvswitch\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239707 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-netns\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239721 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-ovn-kubernetes\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " 
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239735 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-bin\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239773 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-etc-openvswitch\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239790 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-config\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239808 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-kubelet\") pod \"6730cd9d-a0be-4a00-966e-f936e7b888b6\" (UID: \"6730cd9d-a0be-4a00-966e-f936e7b888b6\") " Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239340 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239960 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.239985 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240019 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240042 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240056 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-slash" (OuterVolumeSpecName: "host-slash") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240069 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-node-log" (OuterVolumeSpecName: "node-log") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240083 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240172 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240171 4996 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240209 4996 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240235 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240385 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-log-socket" (OuterVolumeSpecName: "log-socket") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240429 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240457 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240478 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240496 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240513 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.240542 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.246257 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.246853 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6730cd9d-a0be-4a00-966e-f936e7b888b6-kube-api-access-s2k5h" (OuterVolumeSpecName: "kube-api-access-s2k5h") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "kube-api-access-s2k5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.255577 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6730cd9d-a0be-4a00-966e-f936e7b888b6" (UID: "6730cd9d-a0be-4a00-966e-f936e7b888b6"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341387 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-run-systemd\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341429 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-cni-netd\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341446 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-cni-bin\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341465 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-var-lib-openvswitch\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341488 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-run-netns\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341510 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-env-overrides\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341524 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-ovnkube-script-lib\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341541 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-ovn-node-metrics-cert\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341571 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-run-openvswitch\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341592 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-run-ovn\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341609 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-slash\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341625 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-ovnkube-config\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341641 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-systemd-units\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341662 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341680 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-node-log\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341696 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kjmk\" (UniqueName: \"kubernetes.io/projected/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-kube-api-access-6kjmk\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341711 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-kubelet\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341732 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-etc-openvswitch\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341751 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-run-ovn-kubernetes\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.341770 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-log-socket\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342077 4996 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342116 4996 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342126 4996 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-log-socket\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342135 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2k5h\" (UniqueName: \"kubernetes.io/projected/6730cd9d-a0be-4a00-966e-f936e7b888b6-kube-api-access-s2k5h\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342144 4996 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342152 4996 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342161 4996 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342169 4996 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342177 4996 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342185 4996 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342192 4996 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342200 4996 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342208 4996 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6730cd9d-a0be-4a00-966e-f936e7b888b6-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342215 4996 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-slash\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342223 4996 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-node-log\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342230 4996 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342238 4996 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6730cd9d-a0be-4a00-966e-f936e7b888b6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.342246 4996 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6730cd9d-a0be-4a00-966e-f936e7b888b6-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443573 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-run-ovn-kubernetes\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443642 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-log-socket\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443677 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-run-systemd\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443704 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-cni-netd\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443724 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-cni-bin\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443743 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-var-lib-openvswitch\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443772 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-run-netns\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443766 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-run-ovn-kubernetes\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443803 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-env-overrides\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443823 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-ovnkube-script-lib\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443845 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-ovn-node-metrics-cert\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443857 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-var-lib-openvswitch\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443872 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-run-openvswitch\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443897 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-cni-netd\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443904 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-run-ovn\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443901 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-log-socket\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443945 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-run-ovn\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443954 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-run-openvswitch\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443857 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-run-systemd\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.443911 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-cni-bin\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.444065 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-slash\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.444073 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-run-netns\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.444205 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-slash\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.444228 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-ovnkube-config\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.444824 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-systemd-units\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.444881 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.444925 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-node-log\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.444951 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kjmk\" (UniqueName: \"kubernetes.io/projected/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-kube-api-access-6kjmk\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.444976 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-kubelet\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.445055 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-etc-openvswitch\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.445089 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-systemd-units\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.445178 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-kubelet\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.445216 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-etc-openvswitch\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.445242 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-node-log\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.445268 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.445464 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-env-overrides\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.445693 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-ovnkube-config\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.446063 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-ovnkube-script-lib\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.448701 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-ovn-node-metrics-cert\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.474865 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kjmk\" (UniqueName: \"kubernetes.io/projected/9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46-kube-api-access-6kjmk\") pod \"ovnkube-node-d5qqc\" (UID: \"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46\") " pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.491723 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.539212 4996 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.677500 4996 scope.go:117] "RemoveContainer" containerID="40beb2601e93f219dd57d3b266bf77b7418293f3dd79380fc608918277839eaa"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.693609 4996 scope.go:117] "RemoveContainer" containerID="740f34a23024b7f09f9958ee3a267e981d73e796f07806c3b1c8b26290fb8b20"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.756843 4996 scope.go:117] "RemoveContainer" containerID="24f42362399ed34bd6500d57bbca6faa651b7966ae2166ee00c6446663a992c4"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.770571 4996 scope.go:117] "RemoveContainer" containerID="3928cd3052edb1debb214b538e2e03b95341a3381f29bd2a09be67a7bad900c2"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.783485 4996 scope.go:117] "RemoveContainer" containerID="eeb43d191b894dbe97a619103a6478b4a588c7b898dbf3d0e9cc31d76a9b6291"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.798601 4996 scope.go:117] "RemoveContainer" containerID="f09500c0de2e76152253f68cc34dcc06cfeb8b09f0d03d76a5c88c02936b1c71"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.813421 4996 scope.go:117] "RemoveContainer" containerID="969ed60a664902e99158b8713b41488ba9adaeb2e863e21846ab3b0677624842"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.827986 4996 scope.go:117] "RemoveContainer" containerID="c970cc2fc10b1b013833638367d8a0e0d5d3478285404bd6a5006612ce2fef49"
Feb 28 09:13:37 crc kubenswrapper[4996]: I0228 09:13:37.844279 4996 scope.go:117] "RemoveContainer" containerID="1d66c538b1815d4adce18a439e70262bdeabe344429ab231cd3d2fc163d53fef"
Feb 28 09:13:38 crc kubenswrapper[4996]: I0228 09:13:38.065328 4996 generic.go:334] "Generic (PLEG): container finished" podID="9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46" containerID="82562275b12a5ef74f76fe678f436dc83d2222b9cf5907e4c3e15421ed0e9b43" exitCode=0
Feb 28 09:13:38 crc kubenswrapper[4996]: I0228 09:13:38.065408 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerDied","Data":"82562275b12a5ef74f76fe678f436dc83d2222b9cf5907e4c3e15421ed0e9b43"}
Feb 28 09:13:38 crc kubenswrapper[4996]: I0228 09:13:38.065455 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerStarted","Data":"79cfef0feaec7452669a8a1db3369be5d4e3896211d8a87336a5f02194274eff"}
Feb 28 09:13:38 crc kubenswrapper[4996]: I0228 09:13:38.065476 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" event={"ID":"6730cd9d-a0be-4a00-966e-f936e7b888b6","Type":"ContainerDied","Data":"8aa6eb47ea63b41f30ca16fdb385ac84005b929603266e6dcf90fafdbd2ac4ab"}
Feb 28 09:13:38 crc kubenswrapper[4996]: I0228 09:13:38.070518 4996 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-multus_multus-snglm_6ed5a0c7-4cae-4140-be04-b7a0f3899920/kube-multus/1.log" Feb 28 09:13:38 crc kubenswrapper[4996]: I0228 09:13:38.070617 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hjj82" Feb 28 09:13:38 crc kubenswrapper[4996]: I0228 09:13:38.070612 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-snglm" event={"ID":"6ed5a0c7-4cae-4140-be04-b7a0f3899920","Type":"ContainerStarted","Data":"9aae61285bdd68a07ccd853aa785d3c559d35f13ee41ebd700265167df988ee3"} Feb 28 09:13:38 crc kubenswrapper[4996]: I0228 09:13:38.126125 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjj82"] Feb 28 09:13:38 crc kubenswrapper[4996]: I0228 09:13:38.132016 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hjj82"] Feb 28 09:13:39 crc kubenswrapper[4996]: I0228 09:13:39.039096 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6730cd9d-a0be-4a00-966e-f936e7b888b6" path="/var/lib/kubelet/pods/6730cd9d-a0be-4a00-966e-f936e7b888b6/volumes" Feb 28 09:13:39 crc kubenswrapper[4996]: I0228 09:13:39.077666 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerStarted","Data":"8128b6034789852173149059d73a07f1fcb40650d73b476203843ab595cea8fc"} Feb 28 09:13:39 crc kubenswrapper[4996]: I0228 09:13:39.077699 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerStarted","Data":"1dd9708a0f61ad20601f9270d0808f9f96f40c061f8e2b94bcb2add09d7749b9"} Feb 28 09:13:39 crc kubenswrapper[4996]: I0228 09:13:39.077709 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" 
event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerStarted","Data":"b96162b2c03d5891bc8b226c9ee8b26a4c84fae295e6c06be97bf3dd15f2ea99"} Feb 28 09:13:39 crc kubenswrapper[4996]: I0228 09:13:39.077718 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerStarted","Data":"b7779a940964ed11f6e1a459d5594ec03f049e0992fb49be592edbc2caff1815"} Feb 28 09:13:39 crc kubenswrapper[4996]: I0228 09:13:39.077726 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerStarted","Data":"bca791f6821c56f408c54b41652acb74410dd7c426c54c9e9193fb88ef91bd6b"} Feb 28 09:13:39 crc kubenswrapper[4996]: I0228 09:13:39.077735 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerStarted","Data":"1c144ac45f084548e416d559088b546b4a2b8e20a1a8d4daff73eb115c15b9b0"} Feb 28 09:13:41 crc kubenswrapper[4996]: I0228 09:13:41.092534 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerStarted","Data":"f010c333fbe47c77bf48ed47df90d14d51bc2846eb65d019b1c7f519d76ada8f"} Feb 28 09:13:42 crc kubenswrapper[4996]: I0228 09:13:42.249455 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:13:42 crc kubenswrapper[4996]: I0228 09:13:42.249878 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:13:44 crc kubenswrapper[4996]: I0228 09:13:44.114419 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" event={"ID":"9c3a5633-6cdd-4744-9d3f-5b4a56ebdd46","Type":"ContainerStarted","Data":"1f66e2a8810ef6d522bb98e0f2f667e8af8b1e437035db04fcacfbd9d3631f14"} Feb 28 09:13:44 crc kubenswrapper[4996]: I0228 09:13:44.114779 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" Feb 28 09:13:44 crc kubenswrapper[4996]: I0228 09:13:44.114795 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" Feb 28 09:13:44 crc kubenswrapper[4996]: I0228 09:13:44.114808 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" Feb 28 09:13:44 crc kubenswrapper[4996]: I0228 09:13:44.147081 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" podStartSLOduration=7.147060122 podStartE2EDuration="7.147060122s" podCreationTimestamp="2026-02-28 09:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:13:44.144816756 +0000 UTC m=+787.835619567" watchObservedRunningTime="2026-02-28 09:13:44.147060122 +0000 UTC m=+787.837862953" Feb 28 09:13:44 crc kubenswrapper[4996]: I0228 09:13:44.159136 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" Feb 28 09:13:44 crc kubenswrapper[4996]: I0228 09:13:44.160600 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.146633 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537834-lt7m5"] Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.149233 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537834-lt7m5" Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.152954 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.153294 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.153475 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.160588 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537834-lt7m5"] Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.260634 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbshf\" (UniqueName: \"kubernetes.io/projected/3d695de3-cc18-423c-bcda-2370449f8479-kube-api-access-nbshf\") pod \"auto-csr-approver-29537834-lt7m5\" (UID: \"3d695de3-cc18-423c-bcda-2370449f8479\") " pod="openshift-infra/auto-csr-approver-29537834-lt7m5" Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.361738 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbshf\" (UniqueName: \"kubernetes.io/projected/3d695de3-cc18-423c-bcda-2370449f8479-kube-api-access-nbshf\") pod \"auto-csr-approver-29537834-lt7m5\" (UID: \"3d695de3-cc18-423c-bcda-2370449f8479\") " pod="openshift-infra/auto-csr-approver-29537834-lt7m5" Feb 28 09:14:00 crc 
kubenswrapper[4996]: I0228 09:14:00.394500 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbshf\" (UniqueName: \"kubernetes.io/projected/3d695de3-cc18-423c-bcda-2370449f8479-kube-api-access-nbshf\") pod \"auto-csr-approver-29537834-lt7m5\" (UID: \"3d695de3-cc18-423c-bcda-2370449f8479\") " pod="openshift-infra/auto-csr-approver-29537834-lt7m5" Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.502338 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537834-lt7m5" Feb 28 09:14:00 crc kubenswrapper[4996]: I0228 09:14:00.755435 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537834-lt7m5"] Feb 28 09:14:00 crc kubenswrapper[4996]: W0228 09:14:00.766562 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d695de3_cc18_423c_bcda_2370449f8479.slice/crio-7f70f9a5a75b137ec5568635dac4fb3028e87d81b6d0669f6db142ff4c61dd3f WatchSource:0}: Error finding container 7f70f9a5a75b137ec5568635dac4fb3028e87d81b6d0669f6db142ff4c61dd3f: Status 404 returned error can't find the container with id 7f70f9a5a75b137ec5568635dac4fb3028e87d81b6d0669f6db142ff4c61dd3f Feb 28 09:14:01 crc kubenswrapper[4996]: I0228 09:14:01.239650 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537834-lt7m5" event={"ID":"3d695de3-cc18-423c-bcda-2370449f8479","Type":"ContainerStarted","Data":"7f70f9a5a75b137ec5568635dac4fb3028e87d81b6d0669f6db142ff4c61dd3f"} Feb 28 09:14:02 crc kubenswrapper[4996]: I0228 09:14:02.249582 4996 generic.go:334] "Generic (PLEG): container finished" podID="3d695de3-cc18-423c-bcda-2370449f8479" containerID="239e6b119bd0f7fa2300d4a7813fb51c9e2196a7432241432aa8841af3ce4b07" exitCode=0 Feb 28 09:14:02 crc kubenswrapper[4996]: I0228 09:14:02.249670 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29537834-lt7m5" event={"ID":"3d695de3-cc18-423c-bcda-2370449f8479","Type":"ContainerDied","Data":"239e6b119bd0f7fa2300d4a7813fb51c9e2196a7432241432aa8841af3ce4b07"} Feb 28 09:14:03 crc kubenswrapper[4996]: I0228 09:14:03.576130 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537834-lt7m5" Feb 28 09:14:03 crc kubenswrapper[4996]: I0228 09:14:03.707541 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbshf\" (UniqueName: \"kubernetes.io/projected/3d695de3-cc18-423c-bcda-2370449f8479-kube-api-access-nbshf\") pod \"3d695de3-cc18-423c-bcda-2370449f8479\" (UID: \"3d695de3-cc18-423c-bcda-2370449f8479\") " Feb 28 09:14:03 crc kubenswrapper[4996]: I0228 09:14:03.717491 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d695de3-cc18-423c-bcda-2370449f8479-kube-api-access-nbshf" (OuterVolumeSpecName: "kube-api-access-nbshf") pod "3d695de3-cc18-423c-bcda-2370449f8479" (UID: "3d695de3-cc18-423c-bcda-2370449f8479"). InnerVolumeSpecName "kube-api-access-nbshf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:14:03 crc kubenswrapper[4996]: I0228 09:14:03.809754 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbshf\" (UniqueName: \"kubernetes.io/projected/3d695de3-cc18-423c-bcda-2370449f8479-kube-api-access-nbshf\") on node \"crc\" DevicePath \"\"" Feb 28 09:14:04 crc kubenswrapper[4996]: I0228 09:14:04.262448 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537834-lt7m5" event={"ID":"3d695de3-cc18-423c-bcda-2370449f8479","Type":"ContainerDied","Data":"7f70f9a5a75b137ec5568635dac4fb3028e87d81b6d0669f6db142ff4c61dd3f"} Feb 28 09:14:04 crc kubenswrapper[4996]: I0228 09:14:04.262483 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f70f9a5a75b137ec5568635dac4fb3028e87d81b6d0669f6db142ff4c61dd3f" Feb 28 09:14:04 crc kubenswrapper[4996]: I0228 09:14:04.262534 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537834-lt7m5" Feb 28 09:14:04 crc kubenswrapper[4996]: I0228 09:14:04.646730 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537828-jnp7f"] Feb 28 09:14:04 crc kubenswrapper[4996]: I0228 09:14:04.650944 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537828-jnp7f"] Feb 28 09:14:05 crc kubenswrapper[4996]: I0228 09:14:05.047495 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff4c34a9-7e1d-4afc-b670-9e0c70f0271d" path="/var/lib/kubelet/pods/ff4c34a9-7e1d-4afc-b670-9e0c70f0271d/volumes" Feb 28 09:14:07 crc kubenswrapper[4996]: I0228 09:14:07.530669 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d5qqc" Feb 28 09:14:12 crc kubenswrapper[4996]: I0228 09:14:12.248667 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:14:12 crc kubenswrapper[4996]: I0228 09:14:12.249142 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:14:12 crc kubenswrapper[4996]: I0228 09:14:12.249212 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:14:12 crc kubenswrapper[4996]: I0228 09:14:12.249928 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"15be7a1e2ea878c9bfdd0618662f5c8e4a5e11c78306b49a8bacc5ba71758e6f"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:14:12 crc kubenswrapper[4996]: I0228 09:14:12.250061 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://15be7a1e2ea878c9bfdd0618662f5c8e4a5e11c78306b49a8bacc5ba71758e6f" gracePeriod=600 Feb 28 09:14:13 crc kubenswrapper[4996]: I0228 09:14:13.341770 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="15be7a1e2ea878c9bfdd0618662f5c8e4a5e11c78306b49a8bacc5ba71758e6f" exitCode=0 Feb 28 09:14:13 crc kubenswrapper[4996]: I0228 09:14:13.341824 4996 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"15be7a1e2ea878c9bfdd0618662f5c8e4a5e11c78306b49a8bacc5ba71758e6f"} Feb 28 09:14:13 crc kubenswrapper[4996]: I0228 09:14:13.342272 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"02e43fcaf1e32104b093babde57c895a435eb8e935013328e1ffee5be20b3dec"} Feb 28 09:14:13 crc kubenswrapper[4996]: I0228 09:14:13.342308 4996 scope.go:117] "RemoveContainer" containerID="e3b6da15faf8b8661d31491d68582f55b569ea0ca1baae1efe37fa713b132293" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.353608 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm"] Feb 28 09:14:15 crc kubenswrapper[4996]: E0228 09:14:15.354117 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d695de3-cc18-423c-bcda-2370449f8479" containerName="oc" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.354130 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d695de3-cc18-423c-bcda-2370449f8479" containerName="oc" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.354220 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d695de3-cc18-423c-bcda-2370449f8479" containerName="oc" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.354916 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.357157 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.384755 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm"] Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.479874 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.479956 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8jg\" (UniqueName: \"kubernetes.io/projected/1f3f1b93-b4b8-4171-893a-284b4fc07448-kube-api-access-6h8jg\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.480048 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: 
I0228 09:14:15.581475 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.581675 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8jg\" (UniqueName: \"kubernetes.io/projected/1f3f1b93-b4b8-4171-893a-284b4fc07448-kube-api-access-6h8jg\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.582487 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.582760 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.581891 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.612323 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8jg\" (UniqueName: \"kubernetes.io/projected/1f3f1b93-b4b8-4171-893a-284b4fc07448-kube-api-access-6h8jg\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:15 crc kubenswrapper[4996]: I0228 09:14:15.683166 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:16 crc kubenswrapper[4996]: I0228 09:14:16.169942 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm"] Feb 28 09:14:16 crc kubenswrapper[4996]: W0228 09:14:16.182055 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f3f1b93_b4b8_4171_893a_284b4fc07448.slice/crio-c5324f9cc49c635319b18830f7aac8412810fe08e34b3bfbb0293d8abe00d7f8 WatchSource:0}: Error finding container c5324f9cc49c635319b18830f7aac8412810fe08e34b3bfbb0293d8abe00d7f8: Status 404 returned error can't find the container with id c5324f9cc49c635319b18830f7aac8412810fe08e34b3bfbb0293d8abe00d7f8 Feb 28 09:14:16 crc kubenswrapper[4996]: I0228 09:14:16.362084 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" 
event={"ID":"1f3f1b93-b4b8-4171-893a-284b4fc07448","Type":"ContainerStarted","Data":"7f97bd088b38cbf10c9349f13ff4a17e84cf26a526462a55288ccebb8e0498b2"} Feb 28 09:14:16 crc kubenswrapper[4996]: I0228 09:14:16.362141 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" event={"ID":"1f3f1b93-b4b8-4171-893a-284b4fc07448","Type":"ContainerStarted","Data":"c5324f9cc49c635319b18830f7aac8412810fe08e34b3bfbb0293d8abe00d7f8"} Feb 28 09:14:17 crc kubenswrapper[4996]: I0228 09:14:17.369240 4996 generic.go:334] "Generic (PLEG): container finished" podID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerID="7f97bd088b38cbf10c9349f13ff4a17e84cf26a526462a55288ccebb8e0498b2" exitCode=0 Feb 28 09:14:17 crc kubenswrapper[4996]: I0228 09:14:17.369363 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" event={"ID":"1f3f1b93-b4b8-4171-893a-284b4fc07448","Type":"ContainerDied","Data":"7f97bd088b38cbf10c9349f13ff4a17e84cf26a526462a55288ccebb8e0498b2"} Feb 28 09:14:17 crc kubenswrapper[4996]: I0228 09:14:17.720153 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vp8jf"] Feb 28 09:14:17 crc kubenswrapper[4996]: I0228 09:14:17.721526 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:17 crc kubenswrapper[4996]: I0228 09:14:17.747606 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vp8jf"] Feb 28 09:14:17 crc kubenswrapper[4996]: I0228 09:14:17.914421 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zncb\" (UniqueName: \"kubernetes.io/projected/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-kube-api-access-7zncb\") pod \"redhat-operators-vp8jf\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:17 crc kubenswrapper[4996]: I0228 09:14:17.914916 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-catalog-content\") pod \"redhat-operators-vp8jf\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:17 crc kubenswrapper[4996]: I0228 09:14:17.914986 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-utilities\") pod \"redhat-operators-vp8jf\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:18 crc kubenswrapper[4996]: I0228 09:14:18.016047 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zncb\" (UniqueName: \"kubernetes.io/projected/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-kube-api-access-7zncb\") pod \"redhat-operators-vp8jf\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:18 crc kubenswrapper[4996]: I0228 09:14:18.016095 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-catalog-content\") pod \"redhat-operators-vp8jf\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:18 crc kubenswrapper[4996]: I0228 09:14:18.016120 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-utilities\") pod \"redhat-operators-vp8jf\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:18 crc kubenswrapper[4996]: I0228 09:14:18.016722 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-utilities\") pod \"redhat-operators-vp8jf\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:18 crc kubenswrapper[4996]: I0228 09:14:18.016901 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-catalog-content\") pod \"redhat-operators-vp8jf\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:18 crc kubenswrapper[4996]: I0228 09:14:18.041780 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zncb\" (UniqueName: \"kubernetes.io/projected/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-kube-api-access-7zncb\") pod \"redhat-operators-vp8jf\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:18 crc kubenswrapper[4996]: I0228 09:14:18.070766 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:18 crc kubenswrapper[4996]: I0228 09:14:18.517417 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vp8jf"] Feb 28 09:14:18 crc kubenswrapper[4996]: W0228 09:14:18.526631 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bb32542_4ef9_4f63_af6e_a13cf8e290cd.slice/crio-9f69c330b6ef3229539d9cf3d1d6d3091a73eb4b51e4b2d24b892355c4aadd24 WatchSource:0}: Error finding container 9f69c330b6ef3229539d9cf3d1d6d3091a73eb4b51e4b2d24b892355c4aadd24: Status 404 returned error can't find the container with id 9f69c330b6ef3229539d9cf3d1d6d3091a73eb4b51e4b2d24b892355c4aadd24 Feb 28 09:14:19 crc kubenswrapper[4996]: I0228 09:14:19.383156 4996 generic.go:334] "Generic (PLEG): container finished" podID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerID="7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580" exitCode=0 Feb 28 09:14:19 crc kubenswrapper[4996]: I0228 09:14:19.383210 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp8jf" event={"ID":"5bb32542-4ef9-4f63-af6e-a13cf8e290cd","Type":"ContainerDied","Data":"7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580"} Feb 28 09:14:19 crc kubenswrapper[4996]: I0228 09:14:19.383471 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp8jf" event={"ID":"5bb32542-4ef9-4f63-af6e-a13cf8e290cd","Type":"ContainerStarted","Data":"9f69c330b6ef3229539d9cf3d1d6d3091a73eb4b51e4b2d24b892355c4aadd24"} Feb 28 09:14:19 crc kubenswrapper[4996]: I0228 09:14:19.385844 4996 generic.go:334] "Generic (PLEG): container finished" podID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerID="a032dc5f69b433eff724b3623bc5982659b540ce15b77133fba83eac25096528" exitCode=0 Feb 28 09:14:19 crc kubenswrapper[4996]: I0228 09:14:19.385866 
4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" event={"ID":"1f3f1b93-b4b8-4171-893a-284b4fc07448","Type":"ContainerDied","Data":"a032dc5f69b433eff724b3623bc5982659b540ce15b77133fba83eac25096528"} Feb 28 09:14:20 crc kubenswrapper[4996]: I0228 09:14:20.394000 4996 generic.go:334] "Generic (PLEG): container finished" podID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerID="c616c31fb55271ff5d4296df7be9745d1706f6b996fe680eb4d1c45c4937a334" exitCode=0 Feb 28 09:14:20 crc kubenswrapper[4996]: I0228 09:14:20.394143 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" event={"ID":"1f3f1b93-b4b8-4171-893a-284b4fc07448","Type":"ContainerDied","Data":"c616c31fb55271ff5d4296df7be9745d1706f6b996fe680eb4d1c45c4937a334"} Feb 28 09:14:20 crc kubenswrapper[4996]: I0228 09:14:20.398093 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp8jf" event={"ID":"5bb32542-4ef9-4f63-af6e-a13cf8e290cd","Type":"ContainerStarted","Data":"bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5"} Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.409137 4996 generic.go:334] "Generic (PLEG): container finished" podID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerID="bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5" exitCode=0 Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.409209 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp8jf" event={"ID":"5bb32542-4ef9-4f63-af6e-a13cf8e290cd","Type":"ContainerDied","Data":"bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5"} Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.688823 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.864018 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-util\") pod \"1f3f1b93-b4b8-4171-893a-284b4fc07448\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.864426 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h8jg\" (UniqueName: \"kubernetes.io/projected/1f3f1b93-b4b8-4171-893a-284b4fc07448-kube-api-access-6h8jg\") pod \"1f3f1b93-b4b8-4171-893a-284b4fc07448\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.864453 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-bundle\") pod \"1f3f1b93-b4b8-4171-893a-284b4fc07448\" (UID: \"1f3f1b93-b4b8-4171-893a-284b4fc07448\") " Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.864901 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-bundle" (OuterVolumeSpecName: "bundle") pod "1f3f1b93-b4b8-4171-893a-284b4fc07448" (UID: "1f3f1b93-b4b8-4171-893a-284b4fc07448"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.865335 4996 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.872172 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3f1b93-b4b8-4171-893a-284b4fc07448-kube-api-access-6h8jg" (OuterVolumeSpecName: "kube-api-access-6h8jg") pod "1f3f1b93-b4b8-4171-893a-284b4fc07448" (UID: "1f3f1b93-b4b8-4171-893a-284b4fc07448"). InnerVolumeSpecName "kube-api-access-6h8jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:14:21 crc kubenswrapper[4996]: I0228 09:14:21.966948 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h8jg\" (UniqueName: \"kubernetes.io/projected/1f3f1b93-b4b8-4171-893a-284b4fc07448-kube-api-access-6h8jg\") on node \"crc\" DevicePath \"\"" Feb 28 09:14:22 crc kubenswrapper[4996]: I0228 09:14:22.120340 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-util" (OuterVolumeSpecName: "util") pod "1f3f1b93-b4b8-4171-893a-284b4fc07448" (UID: "1f3f1b93-b4b8-4171-893a-284b4fc07448"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:14:22 crc kubenswrapper[4996]: I0228 09:14:22.169047 4996 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f3f1b93-b4b8-4171-893a-284b4fc07448-util\") on node \"crc\" DevicePath \"\"" Feb 28 09:14:22 crc kubenswrapper[4996]: I0228 09:14:22.420504 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp8jf" event={"ID":"5bb32542-4ef9-4f63-af6e-a13cf8e290cd","Type":"ContainerStarted","Data":"495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990"} Feb 28 09:14:22 crc kubenswrapper[4996]: I0228 09:14:22.425167 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" event={"ID":"1f3f1b93-b4b8-4171-893a-284b4fc07448","Type":"ContainerDied","Data":"c5324f9cc49c635319b18830f7aac8412810fe08e34b3bfbb0293d8abe00d7f8"} Feb 28 09:14:22 crc kubenswrapper[4996]: I0228 09:14:22.425201 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5324f9cc49c635319b18830f7aac8412810fe08e34b3bfbb0293d8abe00d7f8" Feb 28 09:14:22 crc kubenswrapper[4996]: I0228 09:14:22.425250 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm" Feb 28 09:14:22 crc kubenswrapper[4996]: I0228 09:14:22.443677 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vp8jf" podStartSLOduration=2.972341458 podStartE2EDuration="5.443655971s" podCreationTimestamp="2026-02-28 09:14:17 +0000 UTC" firstStartedPulling="2026-02-28 09:14:19.38526179 +0000 UTC m=+823.076064621" lastFinishedPulling="2026-02-28 09:14:21.856576323 +0000 UTC m=+825.547379134" observedRunningTime="2026-02-28 09:14:22.439822426 +0000 UTC m=+826.130625247" watchObservedRunningTime="2026-02-28 09:14:22.443655971 +0000 UTC m=+826.134458792" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.667582 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-s685x"] Feb 28 09:14:26 crc kubenswrapper[4996]: E0228 09:14:26.668089 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerName="extract" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.668102 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerName="extract" Feb 28 09:14:26 crc kubenswrapper[4996]: E0228 09:14:26.668115 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerName="pull" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.668120 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerName="pull" Feb 28 09:14:26 crc kubenswrapper[4996]: E0228 09:14:26.668129 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerName="util" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.668135 4996 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerName="util" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.668229 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3f1b93-b4b8-4171-893a-284b4fc07448" containerName="extract" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.668644 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-s685x" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.670370 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-c6rsl" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.677325 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.678080 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.682419 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-s685x"] Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.832206 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz9qv\" (UniqueName: \"kubernetes.io/projected/b1a6c76e-564f-4300-ab4b-001eade60a3c-kube-api-access-mz9qv\") pod \"nmstate-operator-75c5dccd6c-s685x\" (UID: \"b1a6c76e-564f-4300-ab4b-001eade60a3c\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-s685x" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.933960 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz9qv\" (UniqueName: \"kubernetes.io/projected/b1a6c76e-564f-4300-ab4b-001eade60a3c-kube-api-access-mz9qv\") pod \"nmstate-operator-75c5dccd6c-s685x\" (UID: \"b1a6c76e-564f-4300-ab4b-001eade60a3c\") " 
pod="openshift-nmstate/nmstate-operator-75c5dccd6c-s685x" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.958326 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz9qv\" (UniqueName: \"kubernetes.io/projected/b1a6c76e-564f-4300-ab4b-001eade60a3c-kube-api-access-mz9qv\") pod \"nmstate-operator-75c5dccd6c-s685x\" (UID: \"b1a6c76e-564f-4300-ab4b-001eade60a3c\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-s685x" Feb 28 09:14:26 crc kubenswrapper[4996]: I0228 09:14:26.981858 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-s685x" Feb 28 09:14:27 crc kubenswrapper[4996]: I0228 09:14:27.240101 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-s685x"] Feb 28 09:14:27 crc kubenswrapper[4996]: I0228 09:14:27.465279 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-s685x" event={"ID":"b1a6c76e-564f-4300-ab4b-001eade60a3c","Type":"ContainerStarted","Data":"851e66aa24f2eefab3d3f0400cbfacfa889256e2e88a6fcab2e48fe1a07a1d95"} Feb 28 09:14:28 crc kubenswrapper[4996]: I0228 09:14:28.071175 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:28 crc kubenswrapper[4996]: I0228 09:14:28.071541 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:29 crc kubenswrapper[4996]: I0228 09:14:29.132075 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vp8jf" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerName="registry-server" probeResult="failure" output=< Feb 28 09:14:29 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 09:14:29 crc kubenswrapper[4996]: > Feb 28 09:14:30 crc 
kubenswrapper[4996]: I0228 09:14:30.488929 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-s685x" event={"ID":"b1a6c76e-564f-4300-ab4b-001eade60a3c","Type":"ContainerStarted","Data":"710b785e46ecf97120c7ccdd6dbf67a6a9ff18081edf9ec4adcf264031ba37ec"} Feb 28 09:14:30 crc kubenswrapper[4996]: I0228 09:14:30.522104 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-s685x" podStartSLOduration=2.363193967 podStartE2EDuration="4.522080692s" podCreationTimestamp="2026-02-28 09:14:26 +0000 UTC" firstStartedPulling="2026-02-28 09:14:27.25469623 +0000 UTC m=+830.945499041" lastFinishedPulling="2026-02-28 09:14:29.413582955 +0000 UTC m=+833.104385766" observedRunningTime="2026-02-28 09:14:30.51547031 +0000 UTC m=+834.206273121" watchObservedRunningTime="2026-02-28 09:14:30.522080692 +0000 UTC m=+834.212883523" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.695470 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-97f4k"] Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.696626 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-97f4k" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.701673 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-n8dns" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.711183 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf"] Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.712458 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.716267 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.722918 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-97f4k"] Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.734743 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jmt82"] Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.735429 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.760999 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf"] Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.848740 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql"] Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.849362 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.866804 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.867024 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.867172 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-j2gb4" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.869238 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a536c0f7-7da6-4af1-91a2-78ef301ca956-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-tbjkf\" (UID: \"a536c0f7-7da6-4af1-91a2-78ef301ca956\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.869376 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/24a4ec72-da59-4afb-93a8-07f88c99753f-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.869609 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ce37c1b2-44ca-4001-b29b-518b02279f50-dbus-socket\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.869647 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csdgj\" (UniqueName: \"kubernetes.io/projected/6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd-kube-api-access-csdgj\") pod \"nmstate-metrics-69594cc75-97f4k\" (UID: \"6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-97f4k" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.869684 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ce37c1b2-44ca-4001-b29b-518b02279f50-nmstate-lock\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.869731 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6nfg\" (UniqueName: \"kubernetes.io/projected/24a4ec72-da59-4afb-93a8-07f88c99753f-kube-api-access-q6nfg\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.870002 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp986\" (UniqueName: \"kubernetes.io/projected/a536c0f7-7da6-4af1-91a2-78ef301ca956-kube-api-access-bp986\") pod \"nmstate-webhook-786f45cff4-tbjkf\" (UID: \"a536c0f7-7da6-4af1-91a2-78ef301ca956\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.870100 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ce37c1b2-44ca-4001-b29b-518b02279f50-ovs-socket\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " 
pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.870126 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drf9\" (UniqueName: \"kubernetes.io/projected/ce37c1b2-44ca-4001-b29b-518b02279f50-kube-api-access-8drf9\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.870201 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/24a4ec72-da59-4afb-93a8-07f88c99753f-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.871993 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql"] Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.972100 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a536c0f7-7da6-4af1-91a2-78ef301ca956-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-tbjkf\" (UID: \"a536c0f7-7da6-4af1-91a2-78ef301ca956\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.972140 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/24a4ec72-da59-4afb-93a8-07f88c99753f-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.972164 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ce37c1b2-44ca-4001-b29b-518b02279f50-dbus-socket\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.972185 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csdgj\" (UniqueName: \"kubernetes.io/projected/6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd-kube-api-access-csdgj\") pod \"nmstate-metrics-69594cc75-97f4k\" (UID: \"6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-97f4k" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.972492 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ce37c1b2-44ca-4001-b29b-518b02279f50-nmstate-lock\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: E0228 09:14:35.972333 4996 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.972518 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6nfg\" (UniqueName: \"kubernetes.io/projected/24a4ec72-da59-4afb-93a8-07f88c99753f-kube-api-access-q6nfg\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:35 crc kubenswrapper[4996]: E0228 09:14:35.972551 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/24a4ec72-da59-4afb-93a8-07f88c99753f-plugin-serving-cert podName:24a4ec72-da59-4afb-93a8-07f88c99753f nodeName:}" failed. 
No retries permitted until 2026-02-28 09:14:36.472535416 +0000 UTC m=+840.163338227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/24a4ec72-da59-4afb-93a8-07f88c99753f-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-8nvql" (UID: "24a4ec72-da59-4afb-93a8-07f88c99753f") : secret "plugin-serving-cert" not found Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.972844 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp986\" (UniqueName: \"kubernetes.io/projected/a536c0f7-7da6-4af1-91a2-78ef301ca956-kube-api-access-bp986\") pod \"nmstate-webhook-786f45cff4-tbjkf\" (UID: \"a536c0f7-7da6-4af1-91a2-78ef301ca956\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.972556 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ce37c1b2-44ca-4001-b29b-518b02279f50-nmstate-lock\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.972447 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ce37c1b2-44ca-4001-b29b-518b02279f50-dbus-socket\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.973075 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ce37c1b2-44ca-4001-b29b-518b02279f50-ovs-socket\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 
09:14:35.973104 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8drf9\" (UniqueName: \"kubernetes.io/projected/ce37c1b2-44ca-4001-b29b-518b02279f50-kube-api-access-8drf9\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.973196 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/24a4ec72-da59-4afb-93a8-07f88c99753f-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.973151 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ce37c1b2-44ca-4001-b29b-518b02279f50-ovs-socket\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.974120 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/24a4ec72-da59-4afb-93a8-07f88c99753f-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.985885 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a536c0f7-7da6-4af1-91a2-78ef301ca956-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-tbjkf\" (UID: \"a536c0f7-7da6-4af1-91a2-78ef301ca956\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.990481 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6nfg\" (UniqueName: \"kubernetes.io/projected/24a4ec72-da59-4afb-93a8-07f88c99753f-kube-api-access-q6nfg\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:35 crc kubenswrapper[4996]: I0228 09:14:35.998891 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csdgj\" (UniqueName: \"kubernetes.io/projected/6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd-kube-api-access-csdgj\") pod \"nmstate-metrics-69594cc75-97f4k\" (UID: \"6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-97f4k" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.004402 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp986\" (UniqueName: \"kubernetes.io/projected/a536c0f7-7da6-4af1-91a2-78ef301ca956-kube-api-access-bp986\") pod \"nmstate-webhook-786f45cff4-tbjkf\" (UID: \"a536c0f7-7da6-4af1-91a2-78ef301ca956\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.025380 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-97f4k" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.030689 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8drf9\" (UniqueName: \"kubernetes.io/projected/ce37c1b2-44ca-4001-b29b-518b02279f50-kube-api-access-8drf9\") pod \"nmstate-handler-jmt82\" (UID: \"ce37c1b2-44ca-4001-b29b-518b02279f50\") " pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.036275 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.051965 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.107960 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-85fbbb4bfd-qt6gn"] Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.108772 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.136567 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85fbbb4bfd-qt6gn"] Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.181926 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-console-config\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.182157 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-service-ca\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.182211 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vjb\" (UniqueName: \"kubernetes.io/projected/6beeb4a5-f1e8-4782-b571-32282ce3b316-kube-api-access-g6vjb\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " 
pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.182243 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6beeb4a5-f1e8-4782-b571-32282ce3b316-console-oauth-config\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.182262 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-trusted-ca-bundle\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.182301 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-oauth-serving-cert\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.182331 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6beeb4a5-f1e8-4782-b571-32282ce3b316-console-serving-cert\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.283413 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6beeb4a5-f1e8-4782-b571-32282ce3b316-console-serving-cert\") pod 
\"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.283467 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-console-config\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.283496 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-service-ca\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.284343 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-console-config\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.284450 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-service-ca\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.284538 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vjb\" (UniqueName: \"kubernetes.io/projected/6beeb4a5-f1e8-4782-b571-32282ce3b316-kube-api-access-g6vjb\") pod \"console-85fbbb4bfd-qt6gn\" (UID: 
\"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.284585 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6beeb4a5-f1e8-4782-b571-32282ce3b316-console-oauth-config\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.284814 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-trusted-ca-bundle\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.287312 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-oauth-serving-cert\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.293654 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-oauth-serving-cert\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.293876 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6beeb4a5-f1e8-4782-b571-32282ce3b316-trusted-ca-bundle\") pod \"console-85fbbb4bfd-qt6gn\" (UID: 
\"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.300072 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6beeb4a5-f1e8-4782-b571-32282ce3b316-console-serving-cert\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.300105 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6beeb4a5-f1e8-4782-b571-32282ce3b316-console-oauth-config\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.305087 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vjb\" (UniqueName: \"kubernetes.io/projected/6beeb4a5-f1e8-4782-b571-32282ce3b316-kube-api-access-g6vjb\") pod \"console-85fbbb4bfd-qt6gn\" (UID: \"6beeb4a5-f1e8-4782-b571-32282ce3b316\") " pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.322815 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf"] Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.364641 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-97f4k"] Feb 28 09:14:36 crc kubenswrapper[4996]: W0228 09:14:36.370761 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b7eff8e_5a88_4cbc_aec4_1bf997fe31cd.slice/crio-97c3a82c8b50e998af2d7292946d8a744a33ee154828c92aba1ce42b6bbf1344 WatchSource:0}: Error finding container 
97c3a82c8b50e998af2d7292946d8a744a33ee154828c92aba1ce42b6bbf1344: Status 404 returned error can't find the container with id 97c3a82c8b50e998af2d7292946d8a744a33ee154828c92aba1ce42b6bbf1344 Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.452982 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.490249 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/24a4ec72-da59-4afb-93a8-07f88c99753f-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.493552 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/24a4ec72-da59-4afb-93a8-07f88c99753f-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-8nvql\" (UID: \"24a4ec72-da59-4afb-93a8-07f88c99753f\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.546913 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jmt82" event={"ID":"ce37c1b2-44ca-4001-b29b-518b02279f50","Type":"ContainerStarted","Data":"7bdcf99852dad0b74c11af08dd5b9ec72ce189439d04e5f01c305cdda22babc9"} Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.549423 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" event={"ID":"a536c0f7-7da6-4af1-91a2-78ef301ca956","Type":"ContainerStarted","Data":"da7db020d569132aa3287c06a7563294d0707b43e4fbd2220313c6f39c8a0fdc"} Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.550309 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-69594cc75-97f4k" event={"ID":"6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd","Type":"ContainerStarted","Data":"97c3a82c8b50e998af2d7292946d8a744a33ee154828c92aba1ce42b6bbf1344"} Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.779360 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" Feb 28 09:14:36 crc kubenswrapper[4996]: I0228 09:14:36.844089 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-85fbbb4bfd-qt6gn"] Feb 28 09:14:36 crc kubenswrapper[4996]: W0228 09:14:36.859484 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6beeb4a5_f1e8_4782_b571_32282ce3b316.slice/crio-66daa56d8906c0fff5a57bf5a8d35157151215898abb5aed72a19ceadc7ad263 WatchSource:0}: Error finding container 66daa56d8906c0fff5a57bf5a8d35157151215898abb5aed72a19ceadc7ad263: Status 404 returned error can't find the container with id 66daa56d8906c0fff5a57bf5a8d35157151215898abb5aed72a19ceadc7ad263 Feb 28 09:14:37 crc kubenswrapper[4996]: I0228 09:14:37.044190 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql"] Feb 28 09:14:37 crc kubenswrapper[4996]: I0228 09:14:37.560076 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85fbbb4bfd-qt6gn" event={"ID":"6beeb4a5-f1e8-4782-b571-32282ce3b316","Type":"ContainerStarted","Data":"a9049fb15da56c844279627986e7fce7b00cdc2941b86d6eb30dda1c6ee4364e"} Feb 28 09:14:37 crc kubenswrapper[4996]: I0228 09:14:37.560547 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-85fbbb4bfd-qt6gn" event={"ID":"6beeb4a5-f1e8-4782-b571-32282ce3b316","Type":"ContainerStarted","Data":"66daa56d8906c0fff5a57bf5a8d35157151215898abb5aed72a19ceadc7ad263"} Feb 28 09:14:37 crc kubenswrapper[4996]: I0228 
09:14:37.561515 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" event={"ID":"24a4ec72-da59-4afb-93a8-07f88c99753f","Type":"ContainerStarted","Data":"7babdd53802b062781272cd4c57d4417cd54a5d6142b473a25eeff1f6dc55e18"} Feb 28 09:14:37 crc kubenswrapper[4996]: I0228 09:14:37.580771 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-85fbbb4bfd-qt6gn" podStartSLOduration=1.5807507410000001 podStartE2EDuration="1.580750741s" podCreationTimestamp="2026-02-28 09:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:14:37.577348916 +0000 UTC m=+841.268151727" watchObservedRunningTime="2026-02-28 09:14:37.580750741 +0000 UTC m=+841.271553562" Feb 28 09:14:37 crc kubenswrapper[4996]: I0228 09:14:37.879678 4996 scope.go:117] "RemoveContainer" containerID="fcf0e718e7a58ac348c08fd0f19de42ed17d7a528175690d536f90d6b2190fdb" Feb 28 09:14:38 crc kubenswrapper[4996]: I0228 09:14:38.110377 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:38 crc kubenswrapper[4996]: I0228 09:14:38.157677 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:38 crc kubenswrapper[4996]: I0228 09:14:38.340982 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vp8jf"] Feb 28 09:14:39 crc kubenswrapper[4996]: I0228 09:14:39.578510 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" event={"ID":"a536c0f7-7da6-4af1-91a2-78ef301ca956","Type":"ContainerStarted","Data":"021440d88285185699832821646e56689d4eb7e3f48ed850adbc13bfbb6b5579"} Feb 28 09:14:39 crc kubenswrapper[4996]: I0228 09:14:39.579117 4996 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:39 crc kubenswrapper[4996]: I0228 09:14:39.580438 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vp8jf" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerName="registry-server" containerID="cri-o://495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990" gracePeriod=2 Feb 28 09:14:39 crc kubenswrapper[4996]: I0228 09:14:39.580849 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-97f4k" event={"ID":"6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd","Type":"ContainerStarted","Data":"e9a3f7dafa67aa7c2c76acf765140186cf13ee7451c59d98ff6b393b29da938b"} Feb 28 09:14:39 crc kubenswrapper[4996]: I0228 09:14:39.599051 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" podStartSLOduration=1.93005932 podStartE2EDuration="4.599025376s" podCreationTimestamp="2026-02-28 09:14:35 +0000 UTC" firstStartedPulling="2026-02-28 09:14:36.334321269 +0000 UTC m=+840.025124080" lastFinishedPulling="2026-02-28 09:14:39.003287325 +0000 UTC m=+842.694090136" observedRunningTime="2026-02-28 09:14:39.597207923 +0000 UTC m=+843.288010744" watchObservedRunningTime="2026-02-28 09:14:39.599025376 +0000 UTC m=+843.289828207" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.145718 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.254506 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zncb\" (UniqueName: \"kubernetes.io/projected/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-kube-api-access-7zncb\") pod \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.254696 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-utilities\") pod \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.254740 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-catalog-content\") pod \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\" (UID: \"5bb32542-4ef9-4f63-af6e-a13cf8e290cd\") " Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.256162 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-utilities" (OuterVolumeSpecName: "utilities") pod "5bb32542-4ef9-4f63-af6e-a13cf8e290cd" (UID: "5bb32542-4ef9-4f63-af6e-a13cf8e290cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.266020 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-kube-api-access-7zncb" (OuterVolumeSpecName: "kube-api-access-7zncb") pod "5bb32542-4ef9-4f63-af6e-a13cf8e290cd" (UID: "5bb32542-4ef9-4f63-af6e-a13cf8e290cd"). InnerVolumeSpecName "kube-api-access-7zncb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.357902 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zncb\" (UniqueName: \"kubernetes.io/projected/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-kube-api-access-7zncb\") on node \"crc\" DevicePath \"\"" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.357993 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.388529 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bb32542-4ef9-4f63-af6e-a13cf8e290cd" (UID: "5bb32542-4ef9-4f63-af6e-a13cf8e290cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.459962 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bb32542-4ef9-4f63-af6e-a13cf8e290cd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.589880 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" event={"ID":"24a4ec72-da59-4afb-93a8-07f88c99753f","Type":"ContainerStarted","Data":"9a8e56041f2ba80ddb259e99309add04b1dd8ad8b7161987c63db8517167dd69"} Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.593687 4996 generic.go:334] "Generic (PLEG): container finished" podID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerID="495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990" exitCode=0 Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.593784 4996 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp8jf" event={"ID":"5bb32542-4ef9-4f63-af6e-a13cf8e290cd","Type":"ContainerDied","Data":"495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990"} Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.593849 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vp8jf" event={"ID":"5bb32542-4ef9-4f63-af6e-a13cf8e290cd","Type":"ContainerDied","Data":"9f69c330b6ef3229539d9cf3d1d6d3091a73eb4b51e4b2d24b892355c4aadd24"} Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.593875 4996 scope.go:117] "RemoveContainer" containerID="495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.593882 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vp8jf" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.628618 4996 scope.go:117] "RemoveContainer" containerID="bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.637997 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-8nvql" podStartSLOduration=2.723256665 podStartE2EDuration="5.637957674s" podCreationTimestamp="2026-02-28 09:14:35 +0000 UTC" firstStartedPulling="2026-02-28 09:14:37.05325821 +0000 UTC m=+840.744061041" lastFinishedPulling="2026-02-28 09:14:39.967959239 +0000 UTC m=+843.658762050" observedRunningTime="2026-02-28 09:14:40.604908847 +0000 UTC m=+844.295711738" watchObservedRunningTime="2026-02-28 09:14:40.637957674 +0000 UTC m=+844.328760505" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.642723 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vp8jf"] Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.648641 4996 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vp8jf"] Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.665373 4996 scope.go:117] "RemoveContainer" containerID="7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.680366 4996 scope.go:117] "RemoveContainer" containerID="495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990" Feb 28 09:14:40 crc kubenswrapper[4996]: E0228 09:14:40.680819 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990\": container with ID starting with 495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990 not found: ID does not exist" containerID="495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.680895 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990"} err="failed to get container status \"495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990\": rpc error: code = NotFound desc = could not find container \"495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990\": container with ID starting with 495f1e1fbf27e18a53aa27938c06e7a62b0ae16c5f4ac4c07a01c1afb862a990 not found: ID does not exist" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.680959 4996 scope.go:117] "RemoveContainer" containerID="bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5" Feb 28 09:14:40 crc kubenswrapper[4996]: E0228 09:14:40.681479 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5\": container with ID starting with 
bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5 not found: ID does not exist" containerID="bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.681527 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5"} err="failed to get container status \"bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5\": rpc error: code = NotFound desc = could not find container \"bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5\": container with ID starting with bd88e05e26239ed1a71170db697e1bca6b555a36385a28f9cf1c7a6231d08db5 not found: ID does not exist" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.681562 4996 scope.go:117] "RemoveContainer" containerID="7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580" Feb 28 09:14:40 crc kubenswrapper[4996]: E0228 09:14:40.681938 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580\": container with ID starting with 7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580 not found: ID does not exist" containerID="7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580" Feb 28 09:14:40 crc kubenswrapper[4996]: I0228 09:14:40.682038 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580"} err="failed to get container status \"7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580\": rpc error: code = NotFound desc = could not find container \"7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580\": container with ID starting with 7d00eb8744ec9128b7e3565ba8cd32188431c5623e559742fefeb8f8215c2580 not found: ID does not 
exist" Feb 28 09:14:41 crc kubenswrapper[4996]: I0228 09:14:41.041876 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" path="/var/lib/kubelet/pods/5bb32542-4ef9-4f63-af6e-a13cf8e290cd/volumes" Feb 28 09:14:41 crc kubenswrapper[4996]: I0228 09:14:41.599251 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-97f4k" event={"ID":"6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd","Type":"ContainerStarted","Data":"dace63226fa7c9ca367a07579f5715e54f954cd1802fbab3749ae25d779eebdc"} Feb 28 09:14:41 crc kubenswrapper[4996]: I0228 09:14:41.616416 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-97f4k" podStartSLOduration=1.648054665 podStartE2EDuration="6.616396801s" podCreationTimestamp="2026-02-28 09:14:35 +0000 UTC" firstStartedPulling="2026-02-28 09:14:36.373053152 +0000 UTC m=+840.063855963" lastFinishedPulling="2026-02-28 09:14:41.341395288 +0000 UTC m=+845.032198099" observedRunningTime="2026-02-28 09:14:41.614571718 +0000 UTC m=+845.305374549" watchObservedRunningTime="2026-02-28 09:14:41.616396801 +0000 UTC m=+845.307199632" Feb 28 09:14:46 crc kubenswrapper[4996]: I0228 09:14:46.454379 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:46 crc kubenswrapper[4996]: I0228 09:14:46.455569 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:46 crc kubenswrapper[4996]: I0228 09:14:46.459138 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:46 crc kubenswrapper[4996]: I0228 09:14:46.634736 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-85fbbb4bfd-qt6gn" Feb 28 09:14:46 crc kubenswrapper[4996]: 
I0228 09:14:46.677772 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-v49dx"] Feb 28 09:14:56 crc kubenswrapper[4996]: I0228 09:14:56.044502 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-tbjkf" Feb 28 09:14:59 crc kubenswrapper[4996]: I0228 09:14:59.726695 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jmt82" event={"ID":"ce37c1b2-44ca-4001-b29b-518b02279f50","Type":"ContainerStarted","Data":"5813c0ee70f4c9b27b60c8faf06b9ea2dd4bf16f05595928a1ea34dcf81fdc1e"} Feb 28 09:14:59 crc kubenswrapper[4996]: I0228 09:14:59.727302 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:14:59 crc kubenswrapper[4996]: I0228 09:14:59.743151 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jmt82" podStartSLOduration=1.968449445 podStartE2EDuration="24.743134346s" podCreationTimestamp="2026-02-28 09:14:35 +0000 UTC" firstStartedPulling="2026-02-28 09:14:36.106242726 +0000 UTC m=+839.797045537" lastFinishedPulling="2026-02-28 09:14:58.880927617 +0000 UTC m=+862.571730438" observedRunningTime="2026-02-28 09:14:59.74208496 +0000 UTC m=+863.432887771" watchObservedRunningTime="2026-02-28 09:14:59.743134346 +0000 UTC m=+863.433937167" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.177558 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk"] Feb 28 09:15:00 crc kubenswrapper[4996]: E0228 09:15:00.177871 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerName="extract-utilities" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.177899 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" 
containerName="extract-utilities" Feb 28 09:15:00 crc kubenswrapper[4996]: E0228 09:15:00.177925 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerName="extract-content" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.177938 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerName="extract-content" Feb 28 09:15:00 crc kubenswrapper[4996]: E0228 09:15:00.177967 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerName="registry-server" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.177979 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerName="registry-server" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.178185 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bb32542-4ef9-4f63-af6e-a13cf8e290cd" containerName="registry-server" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.178743 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.180966 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.181753 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.185032 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk"] Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.198426 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ps6g\" (UniqueName: \"kubernetes.io/projected/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-kube-api-access-7ps6g\") pod \"collect-profiles-29537835-jnxrk\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.198483 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-secret-volume\") pod \"collect-profiles-29537835-jnxrk\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.198529 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-config-volume\") pod \"collect-profiles-29537835-jnxrk\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.299376 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ps6g\" (UniqueName: \"kubernetes.io/projected/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-kube-api-access-7ps6g\") pod \"collect-profiles-29537835-jnxrk\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.299430 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-secret-volume\") pod \"collect-profiles-29537835-jnxrk\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.299455 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-config-volume\") pod \"collect-profiles-29537835-jnxrk\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.300337 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-config-volume\") pod \"collect-profiles-29537835-jnxrk\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.305684 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-secret-volume\") pod \"collect-profiles-29537835-jnxrk\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.326842 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ps6g\" (UniqueName: \"kubernetes.io/projected/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-kube-api-access-7ps6g\") pod \"collect-profiles-29537835-jnxrk\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.493854 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:00 crc kubenswrapper[4996]: I0228 09:15:00.920283 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk"] Feb 28 09:15:00 crc kubenswrapper[4996]: W0228 09:15:00.928776 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40cfa7d1_3c7a_4e8c_b127_14b0177fe785.slice/crio-f4bbda734e4693e0ecd2105ca3670f84456b2240caadb636713a317d0d089de9 WatchSource:0}: Error finding container f4bbda734e4693e0ecd2105ca3670f84456b2240caadb636713a317d0d089de9: Status 404 returned error can't find the container with id f4bbda734e4693e0ecd2105ca3670f84456b2240caadb636713a317d0d089de9 Feb 28 09:15:01 crc kubenswrapper[4996]: I0228 09:15:01.739804 4996 generic.go:334] "Generic (PLEG): container finished" podID="40cfa7d1-3c7a-4e8c-b127-14b0177fe785" containerID="7c5070893bec19de14dcc6cc41c7443e80176921805c56c1006155be074bff60" exitCode=0 Feb 28 09:15:01 crc kubenswrapper[4996]: I0228 09:15:01.739845 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" event={"ID":"40cfa7d1-3c7a-4e8c-b127-14b0177fe785","Type":"ContainerDied","Data":"7c5070893bec19de14dcc6cc41c7443e80176921805c56c1006155be074bff60"} Feb 28 09:15:01 crc kubenswrapper[4996]: I0228 09:15:01.739869 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" event={"ID":"40cfa7d1-3c7a-4e8c-b127-14b0177fe785","Type":"ContainerStarted","Data":"f4bbda734e4693e0ecd2105ca3670f84456b2240caadb636713a317d0d089de9"} Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.056168 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.135868 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-secret-volume\") pod \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.135945 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ps6g\" (UniqueName: \"kubernetes.io/projected/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-kube-api-access-7ps6g\") pod \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.135993 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-config-volume\") pod \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\" (UID: \"40cfa7d1-3c7a-4e8c-b127-14b0177fe785\") " Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.136746 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-config-volume" (OuterVolumeSpecName: "config-volume") pod "40cfa7d1-3c7a-4e8c-b127-14b0177fe785" (UID: "40cfa7d1-3c7a-4e8c-b127-14b0177fe785"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.140709 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "40cfa7d1-3c7a-4e8c-b127-14b0177fe785" (UID: "40cfa7d1-3c7a-4e8c-b127-14b0177fe785"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.140828 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-kube-api-access-7ps6g" (OuterVolumeSpecName: "kube-api-access-7ps6g") pod "40cfa7d1-3c7a-4e8c-b127-14b0177fe785" (UID: "40cfa7d1-3c7a-4e8c-b127-14b0177fe785"). InnerVolumeSpecName "kube-api-access-7ps6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.236978 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.237043 4996 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.237055 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ps6g\" (UniqueName: \"kubernetes.io/projected/40cfa7d1-3c7a-4e8c-b127-14b0177fe785-kube-api-access-7ps6g\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.757368 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" event={"ID":"40cfa7d1-3c7a-4e8c-b127-14b0177fe785","Type":"ContainerDied","Data":"f4bbda734e4693e0ecd2105ca3670f84456b2240caadb636713a317d0d089de9"} Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.757743 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4bbda734e4693e0ecd2105ca3670f84456b2240caadb636713a317d0d089de9" Feb 28 09:15:03 crc kubenswrapper[4996]: I0228 09:15:03.757413 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk" Feb 28 09:15:06 crc kubenswrapper[4996]: I0228 09:15:06.083832 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jmt82" Feb 28 09:15:11 crc kubenswrapper[4996]: I0228 09:15:11.740237 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-v49dx" podUID="ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" containerName="console" containerID="cri-o://a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e" gracePeriod=15 Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.116744 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-v49dx_ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43/console/0.log" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.117133 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.254944 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-oauth-config\") pod \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.254999 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-config\") pod \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.255179 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-service-ca\") pod \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.255329 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-oauth-serving-cert\") pod \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.255402 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-serving-cert\") pod \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.256202 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckpzl\" (UniqueName: \"kubernetes.io/projected/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-kube-api-access-ckpzl\") pod \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.256699 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-trusted-ca-bundle\") pod \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\" (UID: \"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43\") " Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.256065 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-config" (OuterVolumeSpecName: "console-config") pod "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" (UID: "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.256080 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" (UID: "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.256119 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-service-ca" (OuterVolumeSpecName: "service-ca") pod "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" (UID: "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.257253 4996 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.257279 4996 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.257281 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" (UID: "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.257294 4996 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.262887 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-kube-api-access-ckpzl" (OuterVolumeSpecName: "kube-api-access-ckpzl") pod "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" (UID: "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43"). InnerVolumeSpecName "kube-api-access-ckpzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.266151 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" (UID: "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.266769 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" (UID: "ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.357866 4996 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.357897 4996 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.357910 4996 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.357922 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckpzl\" (UniqueName: \"kubernetes.io/projected/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43-kube-api-access-ckpzl\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.830960 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-v49dx_ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43/console/0.log" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.831309 4996 generic.go:334] "Generic (PLEG): container finished" podID="ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" containerID="a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e" exitCode=2 Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.831338 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v49dx" event={"ID":"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43","Type":"ContainerDied","Data":"a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e"} Feb 28 09:15:12 crc kubenswrapper[4996]: 
I0228 09:15:12.831368 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-v49dx" event={"ID":"ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43","Type":"ContainerDied","Data":"ac20d3c13e605c12ac83c5695f6ec3cf75dabe0fb31c24290076a22e25392493"} Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.831384 4996 scope.go:117] "RemoveContainer" containerID="a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.831389 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-v49dx" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.858219 4996 scope.go:117] "RemoveContainer" containerID="a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e" Feb 28 09:15:12 crc kubenswrapper[4996]: E0228 09:15:12.859249 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e\": container with ID starting with a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e not found: ID does not exist" containerID="a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.859338 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e"} err="failed to get container status \"a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e\": rpc error: code = NotFound desc = could not find container \"a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e\": container with ID starting with a53675eeb39210889a9028d361966615c42d75ade473b8f462137ff1c07aa59e not found: ID does not exist" Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.875611 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-f9d7485db-v49dx"] Feb 28 09:15:12 crc kubenswrapper[4996]: I0228 09:15:12.892483 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-v49dx"] Feb 28 09:15:13 crc kubenswrapper[4996]: I0228 09:15:13.048791 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" path="/var/lib/kubelet/pods/ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43/volumes" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.706540 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct"] Feb 28 09:15:18 crc kubenswrapper[4996]: E0228 09:15:18.707529 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cfa7d1-3c7a-4e8c-b127-14b0177fe785" containerName="collect-profiles" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.707543 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cfa7d1-3c7a-4e8c-b127-14b0177fe785" containerName="collect-profiles" Feb 28 09:15:18 crc kubenswrapper[4996]: E0228 09:15:18.707565 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" containerName="console" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.707571 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" containerName="console" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.707695 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="40cfa7d1-3c7a-4e8c-b127-14b0177fe785" containerName="collect-profiles" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.707736 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7aeea0-dc85-4dfb-ab73-b7e1aef91a43" containerName="console" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.708686 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.710528 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.716130 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct"] Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.842191 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.842493 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqpwl\" (UniqueName: \"kubernetes.io/projected/45a76a31-3be4-4492-93bc-3ceb560d1743-kube-api-access-kqpwl\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.842532 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:18 crc kubenswrapper[4996]: 
I0228 09:15:18.943888 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.943974 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.944115 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqpwl\" (UniqueName: \"kubernetes.io/projected/45a76a31-3be4-4492-93bc-3ceb560d1743-kube-api-access-kqpwl\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.944761 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.944786 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:18 crc kubenswrapper[4996]: I0228 09:15:18.968799 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqpwl\" (UniqueName: \"kubernetes.io/projected/45a76a31-3be4-4492-93bc-3ceb560d1743-kube-api-access-kqpwl\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:19 crc kubenswrapper[4996]: I0228 09:15:19.023230 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:19 crc kubenswrapper[4996]: I0228 09:15:19.254399 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct"] Feb 28 09:15:19 crc kubenswrapper[4996]: W0228 09:15:19.259268 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45a76a31_3be4_4492_93bc_3ceb560d1743.slice/crio-94cfd61b38a556db0ff1319d83d5b5868f7485f583eddf90201f02bee8f6a367 WatchSource:0}: Error finding container 94cfd61b38a556db0ff1319d83d5b5868f7485f583eddf90201f02bee8f6a367: Status 404 returned error can't find the container with id 94cfd61b38a556db0ff1319d83d5b5868f7485f583eddf90201f02bee8f6a367 Feb 28 09:15:19 crc kubenswrapper[4996]: I0228 09:15:19.874402 4996 generic.go:334] "Generic (PLEG): container finished" podID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerID="deda595b30c22afa9f921ba46846b8658d1ddd1942eaeb2ee9d6c9087b0074b0" exitCode=0 
Feb 28 09:15:19 crc kubenswrapper[4996]: I0228 09:15:19.874442 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" event={"ID":"45a76a31-3be4-4492-93bc-3ceb560d1743","Type":"ContainerDied","Data":"deda595b30c22afa9f921ba46846b8658d1ddd1942eaeb2ee9d6c9087b0074b0"} Feb 28 09:15:19 crc kubenswrapper[4996]: I0228 09:15:19.874466 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" event={"ID":"45a76a31-3be4-4492-93bc-3ceb560d1743","Type":"ContainerStarted","Data":"94cfd61b38a556db0ff1319d83d5b5868f7485f583eddf90201f02bee8f6a367"} Feb 28 09:15:19 crc kubenswrapper[4996]: I0228 09:15:19.876379 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:15:21 crc kubenswrapper[4996]: I0228 09:15:21.890202 4996 generic.go:334] "Generic (PLEG): container finished" podID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerID="018659a205a54fa52c59622ef7a090983c06af22a2fdec0152f81744bb814793" exitCode=0 Feb 28 09:15:21 crc kubenswrapper[4996]: I0228 09:15:21.890246 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" event={"ID":"45a76a31-3be4-4492-93bc-3ceb560d1743","Type":"ContainerDied","Data":"018659a205a54fa52c59622ef7a090983c06af22a2fdec0152f81744bb814793"} Feb 28 09:15:22 crc kubenswrapper[4996]: I0228 09:15:22.899734 4996 generic.go:334] "Generic (PLEG): container finished" podID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerID="4f9d1f4f7b98bb6daf3679cd1121c5389aa1a90f10df40ffea2d90d754530a9b" exitCode=0 Feb 28 09:15:22 crc kubenswrapper[4996]: I0228 09:15:22.899814 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" 
event={"ID":"45a76a31-3be4-4492-93bc-3ceb560d1743","Type":"ContainerDied","Data":"4f9d1f4f7b98bb6daf3679cd1121c5389aa1a90f10df40ffea2d90d754530a9b"} Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.168529 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.316606 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-util\") pod \"45a76a31-3be4-4492-93bc-3ceb560d1743\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.316714 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-bundle\") pod \"45a76a31-3be4-4492-93bc-3ceb560d1743\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.316768 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqpwl\" (UniqueName: \"kubernetes.io/projected/45a76a31-3be4-4492-93bc-3ceb560d1743-kube-api-access-kqpwl\") pod \"45a76a31-3be4-4492-93bc-3ceb560d1743\" (UID: \"45a76a31-3be4-4492-93bc-3ceb560d1743\") " Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.317962 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-bundle" (OuterVolumeSpecName: "bundle") pod "45a76a31-3be4-4492-93bc-3ceb560d1743" (UID: "45a76a31-3be4-4492-93bc-3ceb560d1743"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.322503 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45a76a31-3be4-4492-93bc-3ceb560d1743-kube-api-access-kqpwl" (OuterVolumeSpecName: "kube-api-access-kqpwl") pod "45a76a31-3be4-4492-93bc-3ceb560d1743" (UID: "45a76a31-3be4-4492-93bc-3ceb560d1743"). InnerVolumeSpecName "kube-api-access-kqpwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.329602 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-util" (OuterVolumeSpecName: "util") pod "45a76a31-3be4-4492-93bc-3ceb560d1743" (UID: "45a76a31-3be4-4492-93bc-3ceb560d1743"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.418440 4996 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-util\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.418471 4996 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45a76a31-3be4-4492-93bc-3ceb560d1743-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.418482 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqpwl\" (UniqueName: \"kubernetes.io/projected/45a76a31-3be4-4492-93bc-3ceb560d1743-kube-api-access-kqpwl\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.917898 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" 
event={"ID":"45a76a31-3be4-4492-93bc-3ceb560d1743","Type":"ContainerDied","Data":"94cfd61b38a556db0ff1319d83d5b5868f7485f583eddf90201f02bee8f6a367"} Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.917964 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94cfd61b38a556db0ff1319d83d5b5868f7485f583eddf90201f02bee8f6a367" Feb 28 09:15:24 crc kubenswrapper[4996]: I0228 09:15:24.917991 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.906612 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879"] Feb 28 09:15:33 crc kubenswrapper[4996]: E0228 09:15:33.907391 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerName="pull" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.907407 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerName="pull" Feb 28 09:15:33 crc kubenswrapper[4996]: E0228 09:15:33.907420 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerName="util" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.907428 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerName="util" Feb 28 09:15:33 crc kubenswrapper[4996]: E0228 09:15:33.907448 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerName="extract" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.907456 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerName="extract" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.907596 4996 
memory_manager.go:354] "RemoveStaleState removing state" podUID="45a76a31-3be4-4492-93bc-3ceb560d1743" containerName="extract" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.908062 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.914540 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.914574 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.914852 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.919790 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bz4ss" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.920593 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 28 09:15:33 crc kubenswrapper[4996]: I0228 09:15:33.951739 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879"] Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.037959 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e8f07d7-a80e-4587-979f-26d28ce2bf2f-webhook-cert\") pod \"metallb-operator-controller-manager-b55d58fc7-vm879\" (UID: \"0e8f07d7-a80e-4587-979f-26d28ce2bf2f\") " pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.038044 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltnn\" (UniqueName: \"kubernetes.io/projected/0e8f07d7-a80e-4587-979f-26d28ce2bf2f-kube-api-access-rltnn\") pod \"metallb-operator-controller-manager-b55d58fc7-vm879\" (UID: \"0e8f07d7-a80e-4587-979f-26d28ce2bf2f\") " pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.038148 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e8f07d7-a80e-4587-979f-26d28ce2bf2f-apiservice-cert\") pod \"metallb-operator-controller-manager-b55d58fc7-vm879\" (UID: \"0e8f07d7-a80e-4587-979f-26d28ce2bf2f\") " pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.138861 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e8f07d7-a80e-4587-979f-26d28ce2bf2f-apiservice-cert\") pod \"metallb-operator-controller-manager-b55d58fc7-vm879\" (UID: \"0e8f07d7-a80e-4587-979f-26d28ce2bf2f\") " pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.138947 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e8f07d7-a80e-4587-979f-26d28ce2bf2f-webhook-cert\") pod \"metallb-operator-controller-manager-b55d58fc7-vm879\" (UID: \"0e8f07d7-a80e-4587-979f-26d28ce2bf2f\") " pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.138976 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rltnn\" (UniqueName: \"kubernetes.io/projected/0e8f07d7-a80e-4587-979f-26d28ce2bf2f-kube-api-access-rltnn\") pod 
\"metallb-operator-controller-manager-b55d58fc7-vm879\" (UID: \"0e8f07d7-a80e-4587-979f-26d28ce2bf2f\") " pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.145797 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e8f07d7-a80e-4587-979f-26d28ce2bf2f-apiservice-cert\") pod \"metallb-operator-controller-manager-b55d58fc7-vm879\" (UID: \"0e8f07d7-a80e-4587-979f-26d28ce2bf2f\") " pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.148254 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e8f07d7-a80e-4587-979f-26d28ce2bf2f-webhook-cert\") pod \"metallb-operator-controller-manager-b55d58fc7-vm879\" (UID: \"0e8f07d7-a80e-4587-979f-26d28ce2bf2f\") " pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.161159 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rltnn\" (UniqueName: \"kubernetes.io/projected/0e8f07d7-a80e-4587-979f-26d28ce2bf2f-kube-api-access-rltnn\") pod \"metallb-operator-controller-manager-b55d58fc7-vm879\" (UID: \"0e8f07d7-a80e-4587-979f-26d28ce2bf2f\") " pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.230582 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9"] Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.231234 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.232503 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.236980 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.236980 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.237103 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9gd9h" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.261105 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9"] Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.340717 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5146aecb-1f48-48a2-ae75-5289e11c2c06-apiservice-cert\") pod \"metallb-operator-webhook-server-6f44cf5f86-2slf9\" (UID: \"5146aecb-1f48-48a2-ae75-5289e11c2c06\") " pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.340761 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8pc\" (UniqueName: \"kubernetes.io/projected/5146aecb-1f48-48a2-ae75-5289e11c2c06-kube-api-access-rs8pc\") pod \"metallb-operator-webhook-server-6f44cf5f86-2slf9\" (UID: \"5146aecb-1f48-48a2-ae75-5289e11c2c06\") " pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.340783 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5146aecb-1f48-48a2-ae75-5289e11c2c06-webhook-cert\") pod \"metallb-operator-webhook-server-6f44cf5f86-2slf9\" (UID: \"5146aecb-1f48-48a2-ae75-5289e11c2c06\") " pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.441684 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5146aecb-1f48-48a2-ae75-5289e11c2c06-apiservice-cert\") pod \"metallb-operator-webhook-server-6f44cf5f86-2slf9\" (UID: \"5146aecb-1f48-48a2-ae75-5289e11c2c06\") " pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.441993 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs8pc\" (UniqueName: \"kubernetes.io/projected/5146aecb-1f48-48a2-ae75-5289e11c2c06-kube-api-access-rs8pc\") pod \"metallb-operator-webhook-server-6f44cf5f86-2slf9\" (UID: \"5146aecb-1f48-48a2-ae75-5289e11c2c06\") " pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.442033 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5146aecb-1f48-48a2-ae75-5289e11c2c06-webhook-cert\") pod \"metallb-operator-webhook-server-6f44cf5f86-2slf9\" (UID: \"5146aecb-1f48-48a2-ae75-5289e11c2c06\") " pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.448632 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5146aecb-1f48-48a2-ae75-5289e11c2c06-webhook-cert\") pod \"metallb-operator-webhook-server-6f44cf5f86-2slf9\" (UID: \"5146aecb-1f48-48a2-ae75-5289e11c2c06\") " pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc 
kubenswrapper[4996]: I0228 09:15:34.449217 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5146aecb-1f48-48a2-ae75-5289e11c2c06-apiservice-cert\") pod \"metallb-operator-webhook-server-6f44cf5f86-2slf9\" (UID: \"5146aecb-1f48-48a2-ae75-5289e11c2c06\") " pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.457693 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8pc\" (UniqueName: \"kubernetes.io/projected/5146aecb-1f48-48a2-ae75-5289e11c2c06-kube-api-access-rs8pc\") pod \"metallb-operator-webhook-server-6f44cf5f86-2slf9\" (UID: \"5146aecb-1f48-48a2-ae75-5289e11c2c06\") " pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.465357 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879"] Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.585038 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.826572 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9"] Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.977827 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" event={"ID":"0e8f07d7-a80e-4587-979f-26d28ce2bf2f","Type":"ContainerStarted","Data":"ecfbddb956447245a46e34403033ce715a1e56b077f289214116c6c601569b04"} Feb 28 09:15:34 crc kubenswrapper[4996]: I0228 09:15:34.978858 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" event={"ID":"5146aecb-1f48-48a2-ae75-5289e11c2c06","Type":"ContainerStarted","Data":"ece5061d77207fdff93a3120d3d2ac0dc46e8c1947703063aebb4b13d1159a33"} Feb 28 09:15:38 crc kubenswrapper[4996]: I0228 09:15:38.008830 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" event={"ID":"0e8f07d7-a80e-4587-979f-26d28ce2bf2f","Type":"ContainerStarted","Data":"6ebfac149e732533c256b61cab3cdec18b70a02c87c6b29dd8ca23dc18582e39"} Feb 28 09:15:38 crc kubenswrapper[4996]: I0228 09:15:38.009520 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:15:38 crc kubenswrapper[4996]: I0228 09:15:38.034294 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" podStartSLOduration=1.9926816349999998 podStartE2EDuration="5.034280632s" podCreationTimestamp="2026-02-28 09:15:33 +0000 UTC" firstStartedPulling="2026-02-28 09:15:34.491284661 +0000 UTC m=+898.182087482" lastFinishedPulling="2026-02-28 09:15:37.532883668 +0000 UTC 
m=+901.223686479" observedRunningTime="2026-02-28 09:15:38.032773085 +0000 UTC m=+901.723575896" watchObservedRunningTime="2026-02-28 09:15:38.034280632 +0000 UTC m=+901.725083443" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.150165 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v4k6x"] Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.157471 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v4k6x"] Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.157567 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.213322 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-catalog-content\") pod \"certified-operators-v4k6x\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.213397 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-utilities\") pod \"certified-operators-v4k6x\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.213417 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt9dl\" (UniqueName: \"kubernetes.io/projected/888c3365-78ea-4863-bd8b-3ea74ccec4cd-kube-api-access-jt9dl\") pod \"certified-operators-v4k6x\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc 
kubenswrapper[4996]: I0228 09:15:39.314402 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-utilities\") pod \"certified-operators-v4k6x\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.314452 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt9dl\" (UniqueName: \"kubernetes.io/projected/888c3365-78ea-4863-bd8b-3ea74ccec4cd-kube-api-access-jt9dl\") pod \"certified-operators-v4k6x\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.314520 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-catalog-content\") pod \"certified-operators-v4k6x\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.314804 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-utilities\") pod \"certified-operators-v4k6x\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.314891 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-catalog-content\") pod \"certified-operators-v4k6x\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.333058 
4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt9dl\" (UniqueName: \"kubernetes.io/projected/888c3365-78ea-4863-bd8b-3ea74ccec4cd-kube-api-access-jt9dl\") pod \"certified-operators-v4k6x\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.492609 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:39 crc kubenswrapper[4996]: I0228 09:15:39.747996 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v4k6x"] Feb 28 09:15:40 crc kubenswrapper[4996]: I0228 09:15:40.019753 4996 generic.go:334] "Generic (PLEG): container finished" podID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerID="15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69" exitCode=0 Feb 28 09:15:40 crc kubenswrapper[4996]: I0228 09:15:40.019854 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4k6x" event={"ID":"888c3365-78ea-4863-bd8b-3ea74ccec4cd","Type":"ContainerDied","Data":"15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69"} Feb 28 09:15:40 crc kubenswrapper[4996]: I0228 09:15:40.020024 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4k6x" event={"ID":"888c3365-78ea-4863-bd8b-3ea74ccec4cd","Type":"ContainerStarted","Data":"f0aa8ff5d61a28cf6fb7aed48cf7ecc5836210e0d956075d62843fb624a48ac8"} Feb 28 09:15:40 crc kubenswrapper[4996]: I0228 09:15:40.022319 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" event={"ID":"5146aecb-1f48-48a2-ae75-5289e11c2c06","Type":"ContainerStarted","Data":"a115d97460e9069b0e9248b839563b8d4602121cc755413c72d94d84d32965c0"} Feb 28 09:15:40 crc kubenswrapper[4996]: I0228 
09:15:40.022461 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:40 crc kubenswrapper[4996]: I0228 09:15:40.065285 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" podStartSLOduration=1.527902154 podStartE2EDuration="6.065261104s" podCreationTimestamp="2026-02-28 09:15:34 +0000 UTC" firstStartedPulling="2026-02-28 09:15:34.832353651 +0000 UTC m=+898.523156462" lastFinishedPulling="2026-02-28 09:15:39.369712591 +0000 UTC m=+903.060515412" observedRunningTime="2026-02-28 09:15:40.062895697 +0000 UTC m=+903.753698508" watchObservedRunningTime="2026-02-28 09:15:40.065261104 +0000 UTC m=+903.756063925" Feb 28 09:15:41 crc kubenswrapper[4996]: I0228 09:15:41.029761 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4k6x" event={"ID":"888c3365-78ea-4863-bd8b-3ea74ccec4cd","Type":"ContainerStarted","Data":"2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041"} Feb 28 09:15:42 crc kubenswrapper[4996]: I0228 09:15:42.035984 4996 generic.go:334] "Generic (PLEG): container finished" podID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerID="2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041" exitCode=0 Feb 28 09:15:42 crc kubenswrapper[4996]: I0228 09:15:42.036047 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4k6x" event={"ID":"888c3365-78ea-4863-bd8b-3ea74ccec4cd","Type":"ContainerDied","Data":"2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041"} Feb 28 09:15:43 crc kubenswrapper[4996]: I0228 09:15:43.042469 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4k6x" 
event={"ID":"888c3365-78ea-4863-bd8b-3ea74ccec4cd","Type":"ContainerStarted","Data":"b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3"} Feb 28 09:15:43 crc kubenswrapper[4996]: I0228 09:15:43.071477 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v4k6x" podStartSLOduration=1.6698171259999999 podStartE2EDuration="4.071462513s" podCreationTimestamp="2026-02-28 09:15:39 +0000 UTC" firstStartedPulling="2026-02-28 09:15:40.021034117 +0000 UTC m=+903.711836928" lastFinishedPulling="2026-02-28 09:15:42.422679494 +0000 UTC m=+906.113482315" observedRunningTime="2026-02-28 09:15:43.070468759 +0000 UTC m=+906.761271570" watchObservedRunningTime="2026-02-28 09:15:43.071462513 +0000 UTC m=+906.762265324" Feb 28 09:15:49 crc kubenswrapper[4996]: I0228 09:15:49.492786 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:49 crc kubenswrapper[4996]: I0228 09:15:49.493800 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:49 crc kubenswrapper[4996]: I0228 09:15:49.535663 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:50 crc kubenswrapper[4996]: I0228 09:15:50.117663 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:51 crc kubenswrapper[4996]: I0228 09:15:51.928518 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v4k6x"] Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.087808 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v4k6x" podUID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerName="registry-server" 
containerID="cri-o://b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3" gracePeriod=2 Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.483546 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.600019 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-catalog-content\") pod \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.600080 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-utilities\") pod \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.600127 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt9dl\" (UniqueName: \"kubernetes.io/projected/888c3365-78ea-4863-bd8b-3ea74ccec4cd-kube-api-access-jt9dl\") pod \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\" (UID: \"888c3365-78ea-4863-bd8b-3ea74ccec4cd\") " Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.601666 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-utilities" (OuterVolumeSpecName: "utilities") pod "888c3365-78ea-4863-bd8b-3ea74ccec4cd" (UID: "888c3365-78ea-4863-bd8b-3ea74ccec4cd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.611180 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/888c3365-78ea-4863-bd8b-3ea74ccec4cd-kube-api-access-jt9dl" (OuterVolumeSpecName: "kube-api-access-jt9dl") pod "888c3365-78ea-4863-bd8b-3ea74ccec4cd" (UID: "888c3365-78ea-4863-bd8b-3ea74ccec4cd"). InnerVolumeSpecName "kube-api-access-jt9dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.668354 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "888c3365-78ea-4863-bd8b-3ea74ccec4cd" (UID: "888c3365-78ea-4863-bd8b-3ea74ccec4cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.701371 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.701415 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt9dl\" (UniqueName: \"kubernetes.io/projected/888c3365-78ea-4863-bd8b-3ea74ccec4cd-kube-api-access-jt9dl\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:52 crc kubenswrapper[4996]: I0228 09:15:52.701434 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/888c3365-78ea-4863-bd8b-3ea74ccec4cd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.097967 4996 generic.go:334] "Generic (PLEG): container finished" podID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" 
containerID="b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3" exitCode=0 Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.098196 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4k6x" event={"ID":"888c3365-78ea-4863-bd8b-3ea74ccec4cd","Type":"ContainerDied","Data":"b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3"} Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.098308 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v4k6x" event={"ID":"888c3365-78ea-4863-bd8b-3ea74ccec4cd","Type":"ContainerDied","Data":"f0aa8ff5d61a28cf6fb7aed48cf7ecc5836210e0d956075d62843fb624a48ac8"} Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.098322 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v4k6x" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.098330 4996 scope.go:117] "RemoveContainer" containerID="b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.120880 4996 scope.go:117] "RemoveContainer" containerID="2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.127492 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v4k6x"] Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.137536 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v4k6x"] Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.142423 4996 scope.go:117] "RemoveContainer" containerID="15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.163556 4996 scope.go:117] "RemoveContainer" containerID="b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3" Feb 28 
09:15:53 crc kubenswrapper[4996]: E0228 09:15:53.164089 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3\": container with ID starting with b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3 not found: ID does not exist" containerID="b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.164124 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3"} err="failed to get container status \"b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3\": rpc error: code = NotFound desc = could not find container \"b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3\": container with ID starting with b28d12842ac1cc74d21375e7aa88488b914f3affc000f041d0f42d6ec98399e3 not found: ID does not exist" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.164149 4996 scope.go:117] "RemoveContainer" containerID="2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041" Feb 28 09:15:53 crc kubenswrapper[4996]: E0228 09:15:53.164529 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041\": container with ID starting with 2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041 not found: ID does not exist" containerID="2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.164579 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041"} err="failed to get container status 
\"2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041\": rpc error: code = NotFound desc = could not find container \"2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041\": container with ID starting with 2f68413eeeadcdc4d6f53bd31d833138f97307f164914786b5892b6236f97041 not found: ID does not exist" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.164614 4996 scope.go:117] "RemoveContainer" containerID="15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69" Feb 28 09:15:53 crc kubenswrapper[4996]: E0228 09:15:53.164973 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69\": container with ID starting with 15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69 not found: ID does not exist" containerID="15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69" Feb 28 09:15:53 crc kubenswrapper[4996]: I0228 09:15:53.165018 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69"} err="failed to get container status \"15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69\": rpc error: code = NotFound desc = could not find container \"15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69\": container with ID starting with 15a418db8c39333b53bfea66d7219361e4e6ee85dae4dd8088fdbd25d4732c69 not found: ID does not exist" Feb 28 09:15:54 crc kubenswrapper[4996]: I0228 09:15:54.589832 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6f44cf5f86-2slf9" Feb 28 09:15:55 crc kubenswrapper[4996]: I0228 09:15:55.040494 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" 
path="/var/lib/kubelet/pods/888c3365-78ea-4863-bd8b-3ea74ccec4cd/volumes" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.132830 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537836-ldstb"] Feb 28 09:16:00 crc kubenswrapper[4996]: E0228 09:16:00.133560 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerName="registry-server" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.133576 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerName="registry-server" Feb 28 09:16:00 crc kubenswrapper[4996]: E0228 09:16:00.133589 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerName="extract-content" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.133622 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerName="extract-content" Feb 28 09:16:00 crc kubenswrapper[4996]: E0228 09:16:00.133636 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerName="extract-utilities" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.133643 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerName="extract-utilities" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.133778 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="888c3365-78ea-4863-bd8b-3ea74ccec4cd" containerName="registry-server" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.134159 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537836-ldstb" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.136469 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.137800 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.139047 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537836-ldstb"] Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.139659 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.225774 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4xwv\" (UniqueName: \"kubernetes.io/projected/dbd90ac2-a781-47e5-9ee0-03514d3d2c99-kube-api-access-z4xwv\") pod \"auto-csr-approver-29537836-ldstb\" (UID: \"dbd90ac2-a781-47e5-9ee0-03514d3d2c99\") " pod="openshift-infra/auto-csr-approver-29537836-ldstb" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.326767 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4xwv\" (UniqueName: \"kubernetes.io/projected/dbd90ac2-a781-47e5-9ee0-03514d3d2c99-kube-api-access-z4xwv\") pod \"auto-csr-approver-29537836-ldstb\" (UID: \"dbd90ac2-a781-47e5-9ee0-03514d3d2c99\") " pod="openshift-infra/auto-csr-approver-29537836-ldstb" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.346497 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4xwv\" (UniqueName: \"kubernetes.io/projected/dbd90ac2-a781-47e5-9ee0-03514d3d2c99-kube-api-access-z4xwv\") pod \"auto-csr-approver-29537836-ldstb\" (UID: \"dbd90ac2-a781-47e5-9ee0-03514d3d2c99\") " 
pod="openshift-infra/auto-csr-approver-29537836-ldstb" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.502772 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537836-ldstb" Feb 28 09:16:00 crc kubenswrapper[4996]: I0228 09:16:00.710226 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537836-ldstb"] Feb 28 09:16:01 crc kubenswrapper[4996]: I0228 09:16:01.148717 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537836-ldstb" event={"ID":"dbd90ac2-a781-47e5-9ee0-03514d3d2c99","Type":"ContainerStarted","Data":"ad9a268fb8d6247240143477592af5d8b556d5b0b43e70597a4eccafd726f0f3"} Feb 28 09:16:03 crc kubenswrapper[4996]: I0228 09:16:03.164039 4996 generic.go:334] "Generic (PLEG): container finished" podID="dbd90ac2-a781-47e5-9ee0-03514d3d2c99" containerID="73f952884d1c205f42889a765ea2798ed738a56360725fd5ca62f207067646e0" exitCode=0 Feb 28 09:16:03 crc kubenswrapper[4996]: I0228 09:16:03.164165 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537836-ldstb" event={"ID":"dbd90ac2-a781-47e5-9ee0-03514d3d2c99","Type":"ContainerDied","Data":"73f952884d1c205f42889a765ea2798ed738a56360725fd5ca62f207067646e0"} Feb 28 09:16:04 crc kubenswrapper[4996]: I0228 09:16:04.440020 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537836-ldstb" Feb 28 09:16:04 crc kubenswrapper[4996]: I0228 09:16:04.491331 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4xwv\" (UniqueName: \"kubernetes.io/projected/dbd90ac2-a781-47e5-9ee0-03514d3d2c99-kube-api-access-z4xwv\") pod \"dbd90ac2-a781-47e5-9ee0-03514d3d2c99\" (UID: \"dbd90ac2-a781-47e5-9ee0-03514d3d2c99\") " Feb 28 09:16:04 crc kubenswrapper[4996]: I0228 09:16:04.496374 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbd90ac2-a781-47e5-9ee0-03514d3d2c99-kube-api-access-z4xwv" (OuterVolumeSpecName: "kube-api-access-z4xwv") pod "dbd90ac2-a781-47e5-9ee0-03514d3d2c99" (UID: "dbd90ac2-a781-47e5-9ee0-03514d3d2c99"). InnerVolumeSpecName "kube-api-access-z4xwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:16:04 crc kubenswrapper[4996]: I0228 09:16:04.592825 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4xwv\" (UniqueName: \"kubernetes.io/projected/dbd90ac2-a781-47e5-9ee0-03514d3d2c99-kube-api-access-z4xwv\") on node \"crc\" DevicePath \"\"" Feb 28 09:16:05 crc kubenswrapper[4996]: I0228 09:16:05.177830 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537836-ldstb" event={"ID":"dbd90ac2-a781-47e5-9ee0-03514d3d2c99","Type":"ContainerDied","Data":"ad9a268fb8d6247240143477592af5d8b556d5b0b43e70597a4eccafd726f0f3"} Feb 28 09:16:05 crc kubenswrapper[4996]: I0228 09:16:05.178086 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad9a268fb8d6247240143477592af5d8b556d5b0b43e70597a4eccafd726f0f3" Feb 28 09:16:05 crc kubenswrapper[4996]: I0228 09:16:05.178218 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537836-ldstb" Feb 28 09:16:05 crc kubenswrapper[4996]: I0228 09:16:05.502188 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537830-lfn9x"] Feb 28 09:16:05 crc kubenswrapper[4996]: I0228 09:16:05.508154 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537830-lfn9x"] Feb 28 09:16:07 crc kubenswrapper[4996]: I0228 09:16:07.041038 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c9ffe54-8ddf-4937-af96-bf07551c9890" path="/var/lib/kubelet/pods/0c9ffe54-8ddf-4937-af96-bf07551c9890/volumes" Feb 28 09:16:12 crc kubenswrapper[4996]: I0228 09:16:12.249583 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:16:12 crc kubenswrapper[4996]: I0228 09:16:12.249962 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.235646 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b55d58fc7-vm879" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.939683 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d"] Feb 28 09:16:14 crc kubenswrapper[4996]: E0228 09:16:14.939910 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbd90ac2-a781-47e5-9ee0-03514d3d2c99" 
containerName="oc" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.939921 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbd90ac2-a781-47e5-9ee0-03514d3d2c99" containerName="oc" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.940039 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbd90ac2-a781-47e5-9ee0-03514d3d2c99" containerName="oc" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.940424 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.943261 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-45bv9" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.943484 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.948832 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7kw64"] Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.951473 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.953150 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.953194 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 28 09:16:14 crc kubenswrapper[4996]: I0228 09:16:14.958484 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d"] Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.023631 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-w5v6p"] Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.027970 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.030712 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e645411e-43c5-44dd-b06a-4340e026ef8f-metrics-certs\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.030771 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e645411e-43c5-44dd-b06a-4340e026ef8f-frr-startup\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.030814 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-reloader\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " 
pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.030839 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-metrics\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.030862 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-frr-conf\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.030893 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl56d\" (UniqueName: \"kubernetes.io/projected/1e4f759a-a03c-45a7-b736-776f1556c2f5-kube-api-access-xl56d\") pod \"frr-k8s-webhook-server-7f989f654f-9dp7d\" (UID: \"1e4f759a-a03c-45a7-b736-776f1556c2f5\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.030919 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qvv\" (UniqueName: \"kubernetes.io/projected/e645411e-43c5-44dd-b06a-4340e026ef8f-kube-api-access-b6qvv\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.030941 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-frr-sockets\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" 
Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.030983 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e4f759a-a03c-45a7-b736-776f1556c2f5-cert\") pod \"frr-k8s-webhook-server-7f989f654f-9dp7d\" (UID: \"1e4f759a-a03c-45a7-b736-776f1556c2f5\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.031833 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-2mtmf" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.032039 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.032169 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.034699 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.043846 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-ndmxj"] Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.044662 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-ndmxj"] Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.044736 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.047191 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131654 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-reloader\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131706 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-memberlist\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131731 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-metrics\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131750 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-frr-conf\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131776 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl56d\" (UniqueName: \"kubernetes.io/projected/1e4f759a-a03c-45a7-b736-776f1556c2f5-kube-api-access-xl56d\") pod \"frr-k8s-webhook-server-7f989f654f-9dp7d\" (UID: 
\"1e4f759a-a03c-45a7-b736-776f1556c2f5\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131794 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qvv\" (UniqueName: \"kubernetes.io/projected/e645411e-43c5-44dd-b06a-4340e026ef8f-kube-api-access-b6qvv\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131810 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-frr-sockets\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131829 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92jms\" (UniqueName: \"kubernetes.io/projected/2c5f5c9c-0220-40e1-9180-424aa6b0b104-kube-api-access-92jms\") pod \"controller-86ddb6bd46-ndmxj\" (UID: \"2c5f5c9c-0220-40e1-9180-424aa6b0b104\") " pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131863 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c5f5c9c-0220-40e1-9180-424aa6b0b104-metrics-certs\") pod \"controller-86ddb6bd46-ndmxj\" (UID: \"2c5f5c9c-0220-40e1-9180-424aa6b0b104\") " pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131894 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3f229baa-c709-4168-b123-25ee77a6f4c0-metallb-excludel2\") pod \"speaker-w5v6p\" (UID: 
\"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131909 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bhw6\" (UniqueName: \"kubernetes.io/projected/3f229baa-c709-4168-b123-25ee77a6f4c0-kube-api-access-8bhw6\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131930 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e4f759a-a03c-45a7-b736-776f1556c2f5-cert\") pod \"frr-k8s-webhook-server-7f989f654f-9dp7d\" (UID: \"1e4f759a-a03c-45a7-b736-776f1556c2f5\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131949 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e645411e-43c5-44dd-b06a-4340e026ef8f-metrics-certs\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131968 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c5f5c9c-0220-40e1-9180-424aa6b0b104-cert\") pod \"controller-86ddb6bd46-ndmxj\" (UID: \"2c5f5c9c-0220-40e1-9180-424aa6b0b104\") " pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.131992 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-metrics-certs\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 
28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.132029 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e645411e-43c5-44dd-b06a-4340e026ef8f-frr-startup\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.132936 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e645411e-43c5-44dd-b06a-4340e026ef8f-frr-startup\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.133189 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-reloader\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.133470 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-metrics\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: E0228 09:16:15.133522 4996 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 28 09:16:15 crc kubenswrapper[4996]: E0228 09:16:15.133639 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e645411e-43c5-44dd-b06a-4340e026ef8f-metrics-certs podName:e645411e-43c5-44dd-b06a-4340e026ef8f nodeName:}" failed. No retries permitted until 2026-02-28 09:16:15.633612942 +0000 UTC m=+939.324415763 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e645411e-43c5-44dd-b06a-4340e026ef8f-metrics-certs") pod "frr-k8s-7kw64" (UID: "e645411e-43c5-44dd-b06a-4340e026ef8f") : secret "frr-k8s-certs-secret" not found Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.133702 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-frr-conf\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.133913 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e645411e-43c5-44dd-b06a-4340e026ef8f-frr-sockets\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.157563 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qvv\" (UniqueName: \"kubernetes.io/projected/e645411e-43c5-44dd-b06a-4340e026ef8f-kube-api-access-b6qvv\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.157647 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl56d\" (UniqueName: \"kubernetes.io/projected/1e4f759a-a03c-45a7-b736-776f1556c2f5-kube-api-access-xl56d\") pod \"frr-k8s-webhook-server-7f989f654f-9dp7d\" (UID: \"1e4f759a-a03c-45a7-b736-776f1556c2f5\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.158060 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1e4f759a-a03c-45a7-b736-776f1556c2f5-cert\") pod 
\"frr-k8s-webhook-server-7f989f654f-9dp7d\" (UID: \"1e4f759a-a03c-45a7-b736-776f1556c2f5\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.233349 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c5f5c9c-0220-40e1-9180-424aa6b0b104-cert\") pod \"controller-86ddb6bd46-ndmxj\" (UID: \"2c5f5c9c-0220-40e1-9180-424aa6b0b104\") " pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.233415 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-metrics-certs\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.233462 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-memberlist\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.233508 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92jms\" (UniqueName: \"kubernetes.io/projected/2c5f5c9c-0220-40e1-9180-424aa6b0b104-kube-api-access-92jms\") pod \"controller-86ddb6bd46-ndmxj\" (UID: \"2c5f5c9c-0220-40e1-9180-424aa6b0b104\") " pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.233545 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c5f5c9c-0220-40e1-9180-424aa6b0b104-metrics-certs\") pod \"controller-86ddb6bd46-ndmxj\" (UID: \"2c5f5c9c-0220-40e1-9180-424aa6b0b104\") " 
pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.233570 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3f229baa-c709-4168-b123-25ee77a6f4c0-metallb-excludel2\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.233588 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bhw6\" (UniqueName: \"kubernetes.io/projected/3f229baa-c709-4168-b123-25ee77a6f4c0-kube-api-access-8bhw6\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: E0228 09:16:15.233917 4996 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 28 09:16:15 crc kubenswrapper[4996]: E0228 09:16:15.233976 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-memberlist podName:3f229baa-c709-4168-b123-25ee77a6f4c0 nodeName:}" failed. No retries permitted until 2026-02-28 09:16:15.733961135 +0000 UTC m=+939.424763946 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-memberlist") pod "speaker-w5v6p" (UID: "3f229baa-c709-4168-b123-25ee77a6f4c0") : secret "metallb-memberlist" not found Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.234773 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3f229baa-c709-4168-b123-25ee77a6f4c0-metallb-excludel2\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.238370 4996 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.238680 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c5f5c9c-0220-40e1-9180-424aa6b0b104-metrics-certs\") pod \"controller-86ddb6bd46-ndmxj\" (UID: \"2c5f5c9c-0220-40e1-9180-424aa6b0b104\") " pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.253695 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2c5f5c9c-0220-40e1-9180-424aa6b0b104-cert\") pod \"controller-86ddb6bd46-ndmxj\" (UID: \"2c5f5c9c-0220-40e1-9180-424aa6b0b104\") " pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.253769 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bhw6\" (UniqueName: \"kubernetes.io/projected/3f229baa-c709-4168-b123-25ee77a6f4c0-kube-api-access-8bhw6\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.253793 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-92jms\" (UniqueName: \"kubernetes.io/projected/2c5f5c9c-0220-40e1-9180-424aa6b0b104-kube-api-access-92jms\") pod \"controller-86ddb6bd46-ndmxj\" (UID: \"2c5f5c9c-0220-40e1-9180-424aa6b0b104\") " pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.255067 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-metrics-certs\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.261345 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.365332 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.498322 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d"] Feb 28 09:16:15 crc kubenswrapper[4996]: W0228 09:16:15.501576 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e4f759a_a03c_45a7_b736_776f1556c2f5.slice/crio-17bd0b38515e44822083db8fdab3cd7b11281e3f6e1e125580ab684adc2edc14 WatchSource:0}: Error finding container 17bd0b38515e44822083db8fdab3cd7b11281e3f6e1e125580ab684adc2edc14: Status 404 returned error can't find the container with id 17bd0b38515e44822083db8fdab3cd7b11281e3f6e1e125580ab684adc2edc14 Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.604814 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-ndmxj"] Feb 28 09:16:15 crc kubenswrapper[4996]: W0228 09:16:15.612251 4996 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c5f5c9c_0220_40e1_9180_424aa6b0b104.slice/crio-49fa58bfb3ec50572a6febbd99e3cc6b2a2e307ae51ebebe29486cff67126c65 WatchSource:0}: Error finding container 49fa58bfb3ec50572a6febbd99e3cc6b2a2e307ae51ebebe29486cff67126c65: Status 404 returned error can't find the container with id 49fa58bfb3ec50572a6febbd99e3cc6b2a2e307ae51ebebe29486cff67126c65 Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.638444 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e645411e-43c5-44dd-b06a-4340e026ef8f-metrics-certs\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.641621 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e645411e-43c5-44dd-b06a-4340e026ef8f-metrics-certs\") pod \"frr-k8s-7kw64\" (UID: \"e645411e-43c5-44dd-b06a-4340e026ef8f\") " pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.740099 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-memberlist\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:15 crc kubenswrapper[4996]: E0228 09:16:15.740282 4996 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 28 09:16:15 crc kubenswrapper[4996]: E0228 09:16:15.740363 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-memberlist podName:3f229baa-c709-4168-b123-25ee77a6f4c0 nodeName:}" failed. 
No retries permitted until 2026-02-28 09:16:16.740345855 +0000 UTC m=+940.431148666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-memberlist") pod "speaker-w5v6p" (UID: "3f229baa-c709-4168-b123-25ee77a6f4c0") : secret "metallb-memberlist" not found Feb 28 09:16:15 crc kubenswrapper[4996]: I0228 09:16:15.870390 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:16 crc kubenswrapper[4996]: I0228 09:16:16.259500 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerStarted","Data":"59c7dc06a5518cb533fe1ca37b710fc8c5119052dc84d66010ebf68fc38dda2b"} Feb 28 09:16:16 crc kubenswrapper[4996]: I0228 09:16:16.261299 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-ndmxj" event={"ID":"2c5f5c9c-0220-40e1-9180-424aa6b0b104","Type":"ContainerStarted","Data":"40b698dc6e42b505947815e36d901765a27c0735e0cfa8fbd7b14cbe5844e898"} Feb 28 09:16:16 crc kubenswrapper[4996]: I0228 09:16:16.261341 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-ndmxj" event={"ID":"2c5f5c9c-0220-40e1-9180-424aa6b0b104","Type":"ContainerStarted","Data":"4f570f22ca858c6abf69a7f6b901a80a670f4fd5deeea8419887d0de12c37d35"} Feb 28 09:16:16 crc kubenswrapper[4996]: I0228 09:16:16.261355 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-ndmxj" event={"ID":"2c5f5c9c-0220-40e1-9180-424aa6b0b104","Type":"ContainerStarted","Data":"49fa58bfb3ec50572a6febbd99e3cc6b2a2e307ae51ebebe29486cff67126c65"} Feb 28 09:16:16 crc kubenswrapper[4996]: I0228 09:16:16.261389 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:16 crc 
kubenswrapper[4996]: I0228 09:16:16.263206 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" event={"ID":"1e4f759a-a03c-45a7-b736-776f1556c2f5","Type":"ContainerStarted","Data":"17bd0b38515e44822083db8fdab3cd7b11281e3f6e1e125580ab684adc2edc14"} Feb 28 09:16:16 crc kubenswrapper[4996]: I0228 09:16:16.277756 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-ndmxj" podStartSLOduration=1.277735089 podStartE2EDuration="1.277735089s" podCreationTimestamp="2026-02-28 09:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:16:16.274212712 +0000 UTC m=+939.965015573" watchObservedRunningTime="2026-02-28 09:16:16.277735089 +0000 UTC m=+939.968537900" Feb 28 09:16:16 crc kubenswrapper[4996]: I0228 09:16:16.753317 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-memberlist\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:16 crc kubenswrapper[4996]: I0228 09:16:16.771066 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3f229baa-c709-4168-b123-25ee77a6f4c0-memberlist\") pod \"speaker-w5v6p\" (UID: \"3f229baa-c709-4168-b123-25ee77a6f4c0\") " pod="metallb-system/speaker-w5v6p" Feb 28 09:16:16 crc kubenswrapper[4996]: I0228 09:16:16.844823 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-w5v6p" Feb 28 09:16:17 crc kubenswrapper[4996]: I0228 09:16:17.275385 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w5v6p" event={"ID":"3f229baa-c709-4168-b123-25ee77a6f4c0","Type":"ContainerStarted","Data":"b91c8b39b41aa08a10e60fe0b42166dd977d98e6128cd5d0da32cc63e3c0ef23"} Feb 28 09:16:17 crc kubenswrapper[4996]: I0228 09:16:17.275782 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w5v6p" event={"ID":"3f229baa-c709-4168-b123-25ee77a6f4c0","Type":"ContainerStarted","Data":"4ae2cc8df54477925cddf04db657dd6eafc2e0ab37002d81ca1faddbaf0398a0"} Feb 28 09:16:18 crc kubenswrapper[4996]: I0228 09:16:18.284979 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w5v6p" event={"ID":"3f229baa-c709-4168-b123-25ee77a6f4c0","Type":"ContainerStarted","Data":"6f3294ec078c1fbd1a62157022612f618f9f11bd9ce54bd324eb24d85e4a9be7"} Feb 28 09:16:18 crc kubenswrapper[4996]: I0228 09:16:18.285338 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-w5v6p" Feb 28 09:16:18 crc kubenswrapper[4996]: I0228 09:16:18.304731 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-w5v6p" podStartSLOduration=4.304715563 podStartE2EDuration="4.304715563s" podCreationTimestamp="2026-02-28 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:16:18.303538694 +0000 UTC m=+941.994341525" watchObservedRunningTime="2026-02-28 09:16:18.304715563 +0000 UTC m=+941.995518364" Feb 28 09:16:23 crc kubenswrapper[4996]: I0228 09:16:23.339727 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" 
event={"ID":"1e4f759a-a03c-45a7-b736-776f1556c2f5","Type":"ContainerStarted","Data":"e40e54b006351f6e8dc65f3c28558c5643a6770ad3e9616ad109470a394691fc"} Feb 28 09:16:23 crc kubenswrapper[4996]: I0228 09:16:23.340358 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:23 crc kubenswrapper[4996]: I0228 09:16:23.341866 4996 generic.go:334] "Generic (PLEG): container finished" podID="e645411e-43c5-44dd-b06a-4340e026ef8f" containerID="794ceb1e3ae52fd25cd04408f4f113e6756f3fffc3cef44dfdd9f409fd0bf443" exitCode=0 Feb 28 09:16:23 crc kubenswrapper[4996]: I0228 09:16:23.341900 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerDied","Data":"794ceb1e3ae52fd25cd04408f4f113e6756f3fffc3cef44dfdd9f409fd0bf443"} Feb 28 09:16:23 crc kubenswrapper[4996]: I0228 09:16:23.369719 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" podStartSLOduration=2.627790966 podStartE2EDuration="9.369702069s" podCreationTimestamp="2026-02-28 09:16:14 +0000 UTC" firstStartedPulling="2026-02-28 09:16:15.504163554 +0000 UTC m=+939.194966365" lastFinishedPulling="2026-02-28 09:16:22.246074647 +0000 UTC m=+945.936877468" observedRunningTime="2026-02-28 09:16:23.369444453 +0000 UTC m=+947.060247274" watchObservedRunningTime="2026-02-28 09:16:23.369702069 +0000 UTC m=+947.060504890" Feb 28 09:16:24 crc kubenswrapper[4996]: I0228 09:16:24.350625 4996 generic.go:334] "Generic (PLEG): container finished" podID="e645411e-43c5-44dd-b06a-4340e026ef8f" containerID="c0b303c10112c0c1e837e887842606f062ace4969909952c712dac62259d3ce1" exitCode=0 Feb 28 09:16:24 crc kubenswrapper[4996]: I0228 09:16:24.350752 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" 
event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerDied","Data":"c0b303c10112c0c1e837e887842606f062ace4969909952c712dac62259d3ce1"} Feb 28 09:16:25 crc kubenswrapper[4996]: I0228 09:16:25.362127 4996 generic.go:334] "Generic (PLEG): container finished" podID="e645411e-43c5-44dd-b06a-4340e026ef8f" containerID="970754972f65aa4855ea9b2c295172877e3d6c929d8d612a7edf6291e3db8661" exitCode=0 Feb 28 09:16:25 crc kubenswrapper[4996]: I0228 09:16:25.362190 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerDied","Data":"970754972f65aa4855ea9b2c295172877e3d6c929d8d612a7edf6291e3db8661"} Feb 28 09:16:25 crc kubenswrapper[4996]: I0228 09:16:25.372640 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-ndmxj" Feb 28 09:16:26 crc kubenswrapper[4996]: I0228 09:16:26.372603 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerStarted","Data":"d421eb2e6ccd1bb609b30cce6a5955997281ab9776d89d2b9b744f2e05e116ab"} Feb 28 09:16:26 crc kubenswrapper[4996]: I0228 09:16:26.373035 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerStarted","Data":"693d827ff4e05b3e0a4e289cabeeffeec56cbee33ddb73d793091df85403532e"} Feb 28 09:16:26 crc kubenswrapper[4996]: I0228 09:16:26.373044 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerStarted","Data":"744235fc10c7a6c1b1a5a71604f93598edf17f48f1b9a3069dc3ada1d4824083"} Feb 28 09:16:26 crc kubenswrapper[4996]: I0228 09:16:26.373053 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" 
event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerStarted","Data":"4728b19cdbb94663373d53b5543f68d1a5853df6aa1f34a41b0d7ea821038ca2"} Feb 28 09:16:27 crc kubenswrapper[4996]: I0228 09:16:27.385240 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerStarted","Data":"948fc3dd8af2cc7988a37f32356ae5683be26365314f57e04e9ae56d0487df82"} Feb 28 09:16:27 crc kubenswrapper[4996]: I0228 09:16:27.385658 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:27 crc kubenswrapper[4996]: I0228 09:16:27.385676 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7kw64" event={"ID":"e645411e-43c5-44dd-b06a-4340e026ef8f","Type":"ContainerStarted","Data":"8a159efd8f58448e2eea859831203a17dd79afc190ea05a79e8ce26566e39168"} Feb 28 09:16:27 crc kubenswrapper[4996]: I0228 09:16:27.417885 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7kw64" podStartSLOduration=7.125061969 podStartE2EDuration="13.417868134s" podCreationTimestamp="2026-02-28 09:16:14 +0000 UTC" firstStartedPulling="2026-02-28 09:16:15.984714817 +0000 UTC m=+939.675517628" lastFinishedPulling="2026-02-28 09:16:22.277520932 +0000 UTC m=+945.968323793" observedRunningTime="2026-02-28 09:16:27.413510677 +0000 UTC m=+951.104313488" watchObservedRunningTime="2026-02-28 09:16:27.417868134 +0000 UTC m=+951.108670955" Feb 28 09:16:30 crc kubenswrapper[4996]: I0228 09:16:30.871220 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:30 crc kubenswrapper[4996]: I0228 09:16:30.912870 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:35 crc kubenswrapper[4996]: I0228 09:16:35.268342 4996 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-9dp7d" Feb 28 09:16:35 crc kubenswrapper[4996]: I0228 09:16:35.873451 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-7kw64" Feb 28 09:16:36 crc kubenswrapper[4996]: I0228 09:16:36.850872 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-w5v6p" Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.352326 4996 scope.go:117] "RemoveContainer" containerID="0c1dac682c65079cbfba1379251d72657f952987e21c49c2128b43019d5efef6" Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.665109 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-k9kvc"] Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.666139 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k9kvc" Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.678352 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.678537 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.678662 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-q8zlr" Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.678853 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k9kvc"] Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.844763 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mshr\" (UniqueName: 
\"kubernetes.io/projected/ff1c5b1d-e22f-489a-8fe2-0965d2dd8910-kube-api-access-2mshr\") pod \"openstack-operator-index-k9kvc\" (UID: \"ff1c5b1d-e22f-489a-8fe2-0965d2dd8910\") " pod="openstack-operators/openstack-operator-index-k9kvc" Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.946248 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mshr\" (UniqueName: \"kubernetes.io/projected/ff1c5b1d-e22f-489a-8fe2-0965d2dd8910-kube-api-access-2mshr\") pod \"openstack-operator-index-k9kvc\" (UID: \"ff1c5b1d-e22f-489a-8fe2-0965d2dd8910\") " pod="openstack-operators/openstack-operator-index-k9kvc" Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.965621 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mshr\" (UniqueName: \"kubernetes.io/projected/ff1c5b1d-e22f-489a-8fe2-0965d2dd8910-kube-api-access-2mshr\") pod \"openstack-operator-index-k9kvc\" (UID: \"ff1c5b1d-e22f-489a-8fe2-0965d2dd8910\") " pod="openstack-operators/openstack-operator-index-k9kvc" Feb 28 09:16:39 crc kubenswrapper[4996]: I0228 09:16:39.998447 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k9kvc" Feb 28 09:16:40 crc kubenswrapper[4996]: I0228 09:16:40.434026 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k9kvc"] Feb 28 09:16:40 crc kubenswrapper[4996]: W0228 09:16:40.443107 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff1c5b1d_e22f_489a_8fe2_0965d2dd8910.slice/crio-2e721b4a320b3e5137ab31bfbcfd04e86dbd5fc4255510fc8e2286cbeec76139 WatchSource:0}: Error finding container 2e721b4a320b3e5137ab31bfbcfd04e86dbd5fc4255510fc8e2286cbeec76139: Status 404 returned error can't find the container with id 2e721b4a320b3e5137ab31bfbcfd04e86dbd5fc4255510fc8e2286cbeec76139 Feb 28 09:16:40 crc kubenswrapper[4996]: I0228 09:16:40.477852 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k9kvc" event={"ID":"ff1c5b1d-e22f-489a-8fe2-0965d2dd8910","Type":"ContainerStarted","Data":"2e721b4a320b3e5137ab31bfbcfd04e86dbd5fc4255510fc8e2286cbeec76139"} Feb 28 09:16:42 crc kubenswrapper[4996]: I0228 09:16:42.248895 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:16:42 crc kubenswrapper[4996]: I0228 09:16:42.249239 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.007870 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-k9kvc"] Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.500200 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k9kvc" event={"ID":"ff1c5b1d-e22f-489a-8fe2-0965d2dd8910","Type":"ContainerStarted","Data":"7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7"} Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.525234 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k9kvc" podStartSLOduration=2.309818276 podStartE2EDuration="4.525213553s" podCreationTimestamp="2026-02-28 09:16:39 +0000 UTC" firstStartedPulling="2026-02-28 09:16:40.44568973 +0000 UTC m=+964.136492551" lastFinishedPulling="2026-02-28 09:16:42.661085017 +0000 UTC m=+966.351887828" observedRunningTime="2026-02-28 09:16:43.520847476 +0000 UTC m=+967.211650327" watchObservedRunningTime="2026-02-28 09:16:43.525213553 +0000 UTC m=+967.216016374" Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.616336 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6cj4k"] Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.617347 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6cj4k" Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.628858 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6cj4k"] Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.797392 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9xs7\" (UniqueName: \"kubernetes.io/projected/55ef28a1-cfbb-4a48-8b2c-c3f0784976fd-kube-api-access-l9xs7\") pod \"openstack-operator-index-6cj4k\" (UID: \"55ef28a1-cfbb-4a48-8b2c-c3f0784976fd\") " pod="openstack-operators/openstack-operator-index-6cj4k" Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.898741 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9xs7\" (UniqueName: \"kubernetes.io/projected/55ef28a1-cfbb-4a48-8b2c-c3f0784976fd-kube-api-access-l9xs7\") pod \"openstack-operator-index-6cj4k\" (UID: \"55ef28a1-cfbb-4a48-8b2c-c3f0784976fd\") " pod="openstack-operators/openstack-operator-index-6cj4k" Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.917045 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9xs7\" (UniqueName: \"kubernetes.io/projected/55ef28a1-cfbb-4a48-8b2c-c3f0784976fd-kube-api-access-l9xs7\") pod \"openstack-operator-index-6cj4k\" (UID: \"55ef28a1-cfbb-4a48-8b2c-c3f0784976fd\") " pod="openstack-operators/openstack-operator-index-6cj4k" Feb 28 09:16:43 crc kubenswrapper[4996]: I0228 09:16:43.945608 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-6cj4k" Feb 28 09:16:44 crc kubenswrapper[4996]: I0228 09:16:44.369381 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6cj4k"] Feb 28 09:16:44 crc kubenswrapper[4996]: I0228 09:16:44.507317 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6cj4k" event={"ID":"55ef28a1-cfbb-4a48-8b2c-c3f0784976fd","Type":"ContainerStarted","Data":"cef1fbff88a0dcf4abd1c3490baff87020909beebc707019af5a5315759b27e6"} Feb 28 09:16:44 crc kubenswrapper[4996]: I0228 09:16:44.507380 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-k9kvc" podUID="ff1c5b1d-e22f-489a-8fe2-0965d2dd8910" containerName="registry-server" containerID="cri-o://7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7" gracePeriod=2 Feb 28 09:16:44 crc kubenswrapper[4996]: I0228 09:16:44.848205 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k9kvc" Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.015678 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mshr\" (UniqueName: \"kubernetes.io/projected/ff1c5b1d-e22f-489a-8fe2-0965d2dd8910-kube-api-access-2mshr\") pod \"ff1c5b1d-e22f-489a-8fe2-0965d2dd8910\" (UID: \"ff1c5b1d-e22f-489a-8fe2-0965d2dd8910\") " Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.022563 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1c5b1d-e22f-489a-8fe2-0965d2dd8910-kube-api-access-2mshr" (OuterVolumeSpecName: "kube-api-access-2mshr") pod "ff1c5b1d-e22f-489a-8fe2-0965d2dd8910" (UID: "ff1c5b1d-e22f-489a-8fe2-0965d2dd8910"). InnerVolumeSpecName "kube-api-access-2mshr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.117632 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mshr\" (UniqueName: \"kubernetes.io/projected/ff1c5b1d-e22f-489a-8fe2-0965d2dd8910-kube-api-access-2mshr\") on node \"crc\" DevicePath \"\"" Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.514925 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6cj4k" event={"ID":"55ef28a1-cfbb-4a48-8b2c-c3f0784976fd","Type":"ContainerStarted","Data":"b05bf34a6588144726b2e89113cedec57750b5e46787adf7f3f9e9db1f4cbd95"} Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.516810 4996 generic.go:334] "Generic (PLEG): container finished" podID="ff1c5b1d-e22f-489a-8fe2-0965d2dd8910" containerID="7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7" exitCode=0 Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.516860 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k9kvc" event={"ID":"ff1c5b1d-e22f-489a-8fe2-0965d2dd8910","Type":"ContainerDied","Data":"7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7"} Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.516890 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k9kvc" event={"ID":"ff1c5b1d-e22f-489a-8fe2-0965d2dd8910","Type":"ContainerDied","Data":"2e721b4a320b3e5137ab31bfbcfd04e86dbd5fc4255510fc8e2286cbeec76139"} Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.516921 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k9kvc" Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.516936 4996 scope.go:117] "RemoveContainer" containerID="7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7" Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.540062 4996 scope.go:117] "RemoveContainer" containerID="7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7" Feb 28 09:16:45 crc kubenswrapper[4996]: E0228 09:16:45.540620 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7\": container with ID starting with 7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7 not found: ID does not exist" containerID="7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7" Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.540664 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7"} err="failed to get container status \"7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7\": rpc error: code = NotFound desc = could not find container \"7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7\": container with ID starting with 7626950428b82b42dd6d0cd942321a7854bb0c67eb63c6932f5bdb99a46082a7 not found: ID does not exist" Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.541506 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6cj4k" podStartSLOduration=2.494382751 podStartE2EDuration="2.541488533s" podCreationTimestamp="2026-02-28 09:16:43 +0000 UTC" firstStartedPulling="2026-02-28 09:16:44.382580272 +0000 UTC m=+968.073383083" lastFinishedPulling="2026-02-28 09:16:44.429686064 +0000 UTC m=+968.120488865" 
observedRunningTime="2026-02-28 09:16:45.534536922 +0000 UTC m=+969.225339743" watchObservedRunningTime="2026-02-28 09:16:45.541488533 +0000 UTC m=+969.232291354" Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.552161 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-k9kvc"] Feb 28 09:16:45 crc kubenswrapper[4996]: I0228 09:16:45.556178 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-k9kvc"] Feb 28 09:16:47 crc kubenswrapper[4996]: I0228 09:16:47.042378 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1c5b1d-e22f-489a-8fe2-0965d2dd8910" path="/var/lib/kubelet/pods/ff1c5b1d-e22f-489a-8fe2-0965d2dd8910/volumes" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.425163 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kvntc"] Feb 28 09:16:50 crc kubenswrapper[4996]: E0228 09:16:50.425865 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1c5b1d-e22f-489a-8fe2-0965d2dd8910" containerName="registry-server" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.425887 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1c5b1d-e22f-489a-8fe2-0965d2dd8910" containerName="registry-server" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.426124 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1c5b1d-e22f-489a-8fe2-0965d2dd8910" containerName="registry-server" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.427590 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.448531 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvntc"] Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.488804 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz4mv\" (UniqueName: \"kubernetes.io/projected/f16f7d94-dd7f-4b3a-addf-9780628ac217-kube-api-access-bz4mv\") pod \"redhat-marketplace-kvntc\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.488881 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-catalog-content\") pod \"redhat-marketplace-kvntc\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.488979 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-utilities\") pod \"redhat-marketplace-kvntc\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.589947 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-utilities\") pod \"redhat-marketplace-kvntc\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.590287 4996 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bz4mv\" (UniqueName: \"kubernetes.io/projected/f16f7d94-dd7f-4b3a-addf-9780628ac217-kube-api-access-bz4mv\") pod \"redhat-marketplace-kvntc\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.590330 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-catalog-content\") pod \"redhat-marketplace-kvntc\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.590395 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-utilities\") pod \"redhat-marketplace-kvntc\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.590656 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-catalog-content\") pod \"redhat-marketplace-kvntc\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.613099 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz4mv\" (UniqueName: \"kubernetes.io/projected/f16f7d94-dd7f-4b3a-addf-9780628ac217-kube-api-access-bz4mv\") pod \"redhat-marketplace-kvntc\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:50 crc kubenswrapper[4996]: I0228 09:16:50.812019 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:16:51 crc kubenswrapper[4996]: I0228 09:16:51.041207 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvntc"] Feb 28 09:16:51 crc kubenswrapper[4996]: I0228 09:16:51.560326 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvntc" event={"ID":"f16f7d94-dd7f-4b3a-addf-9780628ac217","Type":"ContainerStarted","Data":"9c40536c5b94a4e0dcb04f2bb7efa73d985fe4abf8344e55eb170192eb2d8a91"} Feb 28 09:16:52 crc kubenswrapper[4996]: I0228 09:16:52.570691 4996 generic.go:334] "Generic (PLEG): container finished" podID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerID="6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c" exitCode=0 Feb 28 09:16:52 crc kubenswrapper[4996]: I0228 09:16:52.570764 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvntc" event={"ID":"f16f7d94-dd7f-4b3a-addf-9780628ac217","Type":"ContainerDied","Data":"6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c"} Feb 28 09:16:53 crc kubenswrapper[4996]: I0228 09:16:53.946727 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6cj4k" Feb 28 09:16:53 crc kubenswrapper[4996]: I0228 09:16:53.947137 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6cj4k" Feb 28 09:16:53 crc kubenswrapper[4996]: I0228 09:16:53.985805 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6cj4k" Feb 28 09:16:54 crc kubenswrapper[4996]: I0228 09:16:54.612329 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6cj4k" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.651348 4996 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd"] Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.652914 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.660580 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-dvdhv" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.663061 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd"] Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.713889 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-util\") pod \"3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.714098 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-bundle\") pod \"3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.714197 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xc4\" (UniqueName: \"kubernetes.io/projected/45ec8d80-174c-4265-8b8e-dfdda274e589-kube-api-access-c4xc4\") pod 
\"3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.815621 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-bundle\") pod \"3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.816298 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-bundle\") pod \"3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.816486 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4xc4\" (UniqueName: \"kubernetes.io/projected/45ec8d80-174c-4265-8b8e-dfdda274e589-kube-api-access-c4xc4\") pod \"3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.816931 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-util\") pod \"3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " 
pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.818034 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-util\") pod \"3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.841342 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4xc4\" (UniqueName: \"kubernetes.io/projected/45ec8d80-174c-4265-8b8e-dfdda274e589-kube-api-access-c4xc4\") pod \"3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:58 crc kubenswrapper[4996]: I0228 09:16:58.976265 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:16:59 crc kubenswrapper[4996]: I0228 09:16:59.437743 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd"] Feb 28 09:16:59 crc kubenswrapper[4996]: W0228 09:16:59.442061 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45ec8d80_174c_4265_8b8e_dfdda274e589.slice/crio-92d95abab762aedaf45bfec28a79c37ac69f8b927397593079e08516bf39d318 WatchSource:0}: Error finding container 92d95abab762aedaf45bfec28a79c37ac69f8b927397593079e08516bf39d318: Status 404 returned error can't find the container with id 92d95abab762aedaf45bfec28a79c37ac69f8b927397593079e08516bf39d318 Feb 28 09:16:59 crc kubenswrapper[4996]: I0228 09:16:59.618581 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" event={"ID":"45ec8d80-174c-4265-8b8e-dfdda274e589","Type":"ContainerStarted","Data":"6c959f3c57584bc49b5e011b5cbb2365934a7a1aa7d2f605cc78b727424d7b81"} Feb 28 09:16:59 crc kubenswrapper[4996]: I0228 09:16:59.618637 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" event={"ID":"45ec8d80-174c-4265-8b8e-dfdda274e589","Type":"ContainerStarted","Data":"92d95abab762aedaf45bfec28a79c37ac69f8b927397593079e08516bf39d318"} Feb 28 09:16:59 crc kubenswrapper[4996]: I0228 09:16:59.620715 4996 generic.go:334] "Generic (PLEG): container finished" podID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerID="866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09" exitCode=0 Feb 28 09:16:59 crc kubenswrapper[4996]: I0228 09:16:59.620816 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-kvntc" event={"ID":"f16f7d94-dd7f-4b3a-addf-9780628ac217","Type":"ContainerDied","Data":"866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09"} Feb 28 09:17:00 crc kubenswrapper[4996]: I0228 09:17:00.630544 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvntc" event={"ID":"f16f7d94-dd7f-4b3a-addf-9780628ac217","Type":"ContainerStarted","Data":"f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3"} Feb 28 09:17:00 crc kubenswrapper[4996]: I0228 09:17:00.632579 4996 generic.go:334] "Generic (PLEG): container finished" podID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerID="6c959f3c57584bc49b5e011b5cbb2365934a7a1aa7d2f605cc78b727424d7b81" exitCode=0 Feb 28 09:17:00 crc kubenswrapper[4996]: I0228 09:17:00.632608 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" event={"ID":"45ec8d80-174c-4265-8b8e-dfdda274e589","Type":"ContainerDied","Data":"6c959f3c57584bc49b5e011b5cbb2365934a7a1aa7d2f605cc78b727424d7b81"} Feb 28 09:17:00 crc kubenswrapper[4996]: I0228 09:17:00.656641 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kvntc" podStartSLOduration=9.240295352 podStartE2EDuration="10.656616863s" podCreationTimestamp="2026-02-28 09:16:50 +0000 UTC" firstStartedPulling="2026-02-28 09:16:58.61117097 +0000 UTC m=+982.301973781" lastFinishedPulling="2026-02-28 09:17:00.027492481 +0000 UTC m=+983.718295292" observedRunningTime="2026-02-28 09:17:00.653195298 +0000 UTC m=+984.343998109" watchObservedRunningTime="2026-02-28 09:17:00.656616863 +0000 UTC m=+984.347419714" Feb 28 09:17:00 crc kubenswrapper[4996]: I0228 09:17:00.812289 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:17:00 crc kubenswrapper[4996]: I0228 
09:17:00.812497 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:17:01 crc kubenswrapper[4996]: I0228 09:17:01.640586 4996 generic.go:334] "Generic (PLEG): container finished" podID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerID="4a3f99c84918da9b0d592582ae1fe90648cb239eba6681cad3d01f3e33628e68" exitCode=0 Feb 28 09:17:01 crc kubenswrapper[4996]: I0228 09:17:01.640714 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" event={"ID":"45ec8d80-174c-4265-8b8e-dfdda274e589","Type":"ContainerDied","Data":"4a3f99c84918da9b0d592582ae1fe90648cb239eba6681cad3d01f3e33628e68"} Feb 28 09:17:01 crc kubenswrapper[4996]: I0228 09:17:01.924119 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-kvntc" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerName="registry-server" probeResult="failure" output=< Feb 28 09:17:01 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 09:17:01 crc kubenswrapper[4996]: > Feb 28 09:17:02 crc kubenswrapper[4996]: I0228 09:17:02.651206 4996 generic.go:334] "Generic (PLEG): container finished" podID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerID="0d0a65172e23e0438c28e3a8ca01ac425e741968f280aca0c41033f53cca7d63" exitCode=0 Feb 28 09:17:02 crc kubenswrapper[4996]: I0228 09:17:02.651285 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" event={"ID":"45ec8d80-174c-4265-8b8e-dfdda274e589","Type":"ContainerDied","Data":"0d0a65172e23e0438c28e3a8ca01ac425e741968f280aca0c41033f53cca7d63"} Feb 28 09:17:03 crc kubenswrapper[4996]: I0228 09:17:03.955453 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.088653 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4xc4\" (UniqueName: \"kubernetes.io/projected/45ec8d80-174c-4265-8b8e-dfdda274e589-kube-api-access-c4xc4\") pod \"45ec8d80-174c-4265-8b8e-dfdda274e589\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.089357 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-bundle\") pod \"45ec8d80-174c-4265-8b8e-dfdda274e589\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.089412 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-util\") pod \"45ec8d80-174c-4265-8b8e-dfdda274e589\" (UID: \"45ec8d80-174c-4265-8b8e-dfdda274e589\") " Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.090806 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-bundle" (OuterVolumeSpecName: "bundle") pod "45ec8d80-174c-4265-8b8e-dfdda274e589" (UID: "45ec8d80-174c-4265-8b8e-dfdda274e589"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.094907 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45ec8d80-174c-4265-8b8e-dfdda274e589-kube-api-access-c4xc4" (OuterVolumeSpecName: "kube-api-access-c4xc4") pod "45ec8d80-174c-4265-8b8e-dfdda274e589" (UID: "45ec8d80-174c-4265-8b8e-dfdda274e589"). InnerVolumeSpecName "kube-api-access-c4xc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.106334 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-util" (OuterVolumeSpecName: "util") pod "45ec8d80-174c-4265-8b8e-dfdda274e589" (UID: "45ec8d80-174c-4265-8b8e-dfdda274e589"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.190616 4996 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.190650 4996 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/45ec8d80-174c-4265-8b8e-dfdda274e589-util\") on node \"crc\" DevicePath \"\"" Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.190660 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4xc4\" (UniqueName: \"kubernetes.io/projected/45ec8d80-174c-4265-8b8e-dfdda274e589-kube-api-access-c4xc4\") on node \"crc\" DevicePath \"\"" Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.672461 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" event={"ID":"45ec8d80-174c-4265-8b8e-dfdda274e589","Type":"ContainerDied","Data":"92d95abab762aedaf45bfec28a79c37ac69f8b927397593079e08516bf39d318"} Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.672522 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92d95abab762aedaf45bfec28a79c37ac69f8b927397593079e08516bf39d318" Feb 28 09:17:04 crc kubenswrapper[4996]: I0228 09:17:04.672646 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.610562 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5bdzw"] Feb 28 09:17:06 crc kubenswrapper[4996]: E0228 09:17:06.611197 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerName="util" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.611215 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerName="util" Feb 28 09:17:06 crc kubenswrapper[4996]: E0228 09:17:06.611230 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerName="extract" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.611238 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerName="extract" Feb 28 09:17:06 crc kubenswrapper[4996]: E0228 09:17:06.611250 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerName="pull" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.611258 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerName="pull" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.611393 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="45ec8d80-174c-4265-8b8e-dfdda274e589" containerName="extract" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.612414 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.627977 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bdzw"] Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.724362 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-utilities\") pod \"community-operators-5bdzw\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.724424 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khkcd\" (UniqueName: \"kubernetes.io/projected/bcc66c74-d127-4d39-9505-924c7484fc45-kube-api-access-khkcd\") pod \"community-operators-5bdzw\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.724561 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-catalog-content\") pod \"community-operators-5bdzw\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.825889 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-utilities\") pod \"community-operators-5bdzw\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.825979 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-khkcd\" (UniqueName: \"kubernetes.io/projected/bcc66c74-d127-4d39-9505-924c7484fc45-kube-api-access-khkcd\") pod \"community-operators-5bdzw\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.826107 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-catalog-content\") pod \"community-operators-5bdzw\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.826436 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-utilities\") pod \"community-operators-5bdzw\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.826721 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-catalog-content\") pod \"community-operators-5bdzw\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.841231 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khkcd\" (UniqueName: \"kubernetes.io/projected/bcc66c74-d127-4d39-9505-924c7484fc45-kube-api-access-khkcd\") pod \"community-operators-5bdzw\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:06 crc kubenswrapper[4996]: I0228 09:17:06.929886 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:07 crc kubenswrapper[4996]: I0228 09:17:07.403229 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5bdzw"] Feb 28 09:17:07 crc kubenswrapper[4996]: I0228 09:17:07.693083 4996 generic.go:334] "Generic (PLEG): container finished" podID="bcc66c74-d127-4d39-9505-924c7484fc45" containerID="80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589" exitCode=0 Feb 28 09:17:07 crc kubenswrapper[4996]: I0228 09:17:07.693157 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bdzw" event={"ID":"bcc66c74-d127-4d39-9505-924c7484fc45","Type":"ContainerDied","Data":"80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589"} Feb 28 09:17:07 crc kubenswrapper[4996]: I0228 09:17:07.693379 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bdzw" event={"ID":"bcc66c74-d127-4d39-9505-924c7484fc45","Type":"ContainerStarted","Data":"14f57866507c2ee5659c25148e9d8f231fd4ce31374b79a4617a0ff528281fec"} Feb 28 09:17:08 crc kubenswrapper[4996]: I0228 09:17:08.701198 4996 generic.go:334] "Generic (PLEG): container finished" podID="bcc66c74-d127-4d39-9505-924c7484fc45" containerID="877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e" exitCode=0 Feb 28 09:17:08 crc kubenswrapper[4996]: I0228 09:17:08.701263 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bdzw" event={"ID":"bcc66c74-d127-4d39-9505-924c7484fc45","Type":"ContainerDied","Data":"877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e"} Feb 28 09:17:09 crc kubenswrapper[4996]: I0228 09:17:09.712043 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bdzw" 
event={"ID":"bcc66c74-d127-4d39-9505-924c7484fc45","Type":"ContainerStarted","Data":"2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e"} Feb 28 09:17:09 crc kubenswrapper[4996]: I0228 09:17:09.742360 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5bdzw" podStartSLOduration=2.373048403 podStartE2EDuration="3.742329561s" podCreationTimestamp="2026-02-28 09:17:06 +0000 UTC" firstStartedPulling="2026-02-28 09:17:07.695481033 +0000 UTC m=+991.386283844" lastFinishedPulling="2026-02-28 09:17:09.064762151 +0000 UTC m=+992.755565002" observedRunningTime="2026-02-28 09:17:09.734091217 +0000 UTC m=+993.424894038" watchObservedRunningTime="2026-02-28 09:17:09.742329561 +0000 UTC m=+993.433132392" Feb 28 09:17:09 crc kubenswrapper[4996]: I0228 09:17:09.775084 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j"] Feb 28 09:17:09 crc kubenswrapper[4996]: I0228 09:17:09.775805 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" Feb 28 09:17:09 crc kubenswrapper[4996]: I0228 09:17:09.777494 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-t8vd4" Feb 28 09:17:09 crc kubenswrapper[4996]: I0228 09:17:09.786177 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j"] Feb 28 09:17:09 crc kubenswrapper[4996]: I0228 09:17:09.894434 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc5tk\" (UniqueName: \"kubernetes.io/projected/e8534bde-79ad-4654-8a2b-8fa14ee7266b-kube-api-access-fc5tk\") pod \"openstack-operator-controller-init-646b94fdfc-pc26j\" (UID: \"e8534bde-79ad-4654-8a2b-8fa14ee7266b\") " pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" Feb 28 09:17:09 crc kubenswrapper[4996]: I0228 09:17:09.995707 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc5tk\" (UniqueName: \"kubernetes.io/projected/e8534bde-79ad-4654-8a2b-8fa14ee7266b-kube-api-access-fc5tk\") pod \"openstack-operator-controller-init-646b94fdfc-pc26j\" (UID: \"e8534bde-79ad-4654-8a2b-8fa14ee7266b\") " pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" Feb 28 09:17:10 crc kubenswrapper[4996]: I0228 09:17:10.030097 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc5tk\" (UniqueName: \"kubernetes.io/projected/e8534bde-79ad-4654-8a2b-8fa14ee7266b-kube-api-access-fc5tk\") pod \"openstack-operator-controller-init-646b94fdfc-pc26j\" (UID: \"e8534bde-79ad-4654-8a2b-8fa14ee7266b\") " pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" Feb 28 09:17:10 crc kubenswrapper[4996]: I0228 09:17:10.096124 4996 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" Feb 28 09:17:10 crc kubenswrapper[4996]: I0228 09:17:10.356650 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j"] Feb 28 09:17:10 crc kubenswrapper[4996]: I0228 09:17:10.718501 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" event={"ID":"e8534bde-79ad-4654-8a2b-8fa14ee7266b","Type":"ContainerStarted","Data":"0a023ec98fdf24c434e05856e09cef2da575397b5b4327bf96785cbfc0035f78"} Feb 28 09:17:10 crc kubenswrapper[4996]: I0228 09:17:10.871400 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:17:10 crc kubenswrapper[4996]: I0228 09:17:10.922332 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:17:12 crc kubenswrapper[4996]: I0228 09:17:12.249259 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:17:12 crc kubenswrapper[4996]: I0228 09:17:12.249316 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:17:12 crc kubenswrapper[4996]: I0228 09:17:12.249361 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 
09:17:12 crc kubenswrapper[4996]: I0228 09:17:12.249900 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02e43fcaf1e32104b093babde57c895a435eb8e935013328e1ffee5be20b3dec"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:17:12 crc kubenswrapper[4996]: I0228 09:17:12.249951 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://02e43fcaf1e32104b093babde57c895a435eb8e935013328e1ffee5be20b3dec" gracePeriod=600 Feb 28 09:17:12 crc kubenswrapper[4996]: I0228 09:17:12.733710 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="02e43fcaf1e32104b093babde57c895a435eb8e935013328e1ffee5be20b3dec" exitCode=0 Feb 28 09:17:12 crc kubenswrapper[4996]: I0228 09:17:12.733981 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"02e43fcaf1e32104b093babde57c895a435eb8e935013328e1ffee5be20b3dec"} Feb 28 09:17:12 crc kubenswrapper[4996]: I0228 09:17:12.734073 4996 scope.go:117] "RemoveContainer" containerID="15be7a1e2ea878c9bfdd0618662f5c8e4a5e11c78306b49a8bacc5ba71758e6f" Feb 28 09:17:14 crc kubenswrapper[4996]: I0228 09:17:14.403533 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvntc"] Feb 28 09:17:14 crc kubenswrapper[4996]: I0228 09:17:14.405725 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kvntc" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" 
containerName="registry-server" containerID="cri-o://f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3" gracePeriod=2 Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.250816 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.373584 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-catalog-content\") pod \"f16f7d94-dd7f-4b3a-addf-9780628ac217\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.373744 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-utilities\") pod \"f16f7d94-dd7f-4b3a-addf-9780628ac217\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.373793 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz4mv\" (UniqueName: \"kubernetes.io/projected/f16f7d94-dd7f-4b3a-addf-9780628ac217-kube-api-access-bz4mv\") pod \"f16f7d94-dd7f-4b3a-addf-9780628ac217\" (UID: \"f16f7d94-dd7f-4b3a-addf-9780628ac217\") " Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.374671 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-utilities" (OuterVolumeSpecName: "utilities") pod "f16f7d94-dd7f-4b3a-addf-9780628ac217" (UID: "f16f7d94-dd7f-4b3a-addf-9780628ac217"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.381844 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f16f7d94-dd7f-4b3a-addf-9780628ac217-kube-api-access-bz4mv" (OuterVolumeSpecName: "kube-api-access-bz4mv") pod "f16f7d94-dd7f-4b3a-addf-9780628ac217" (UID: "f16f7d94-dd7f-4b3a-addf-9780628ac217"). InnerVolumeSpecName "kube-api-access-bz4mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.408617 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f16f7d94-dd7f-4b3a-addf-9780628ac217" (UID: "f16f7d94-dd7f-4b3a-addf-9780628ac217"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.475935 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.475980 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f16f7d94-dd7f-4b3a-addf-9780628ac217-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.475991 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz4mv\" (UniqueName: \"kubernetes.io/projected/f16f7d94-dd7f-4b3a-addf-9780628ac217-kube-api-access-bz4mv\") on node \"crc\" DevicePath \"\"" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.771514 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"4772ad3990edf9d0c6d563de92a45c70bf5a82075c0fa4fd5de03b133e39b174"} Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.777558 4996 generic.go:334] "Generic (PLEG): container finished" podID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerID="f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3" exitCode=0 Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.777638 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvntc" event={"ID":"f16f7d94-dd7f-4b3a-addf-9780628ac217","Type":"ContainerDied","Data":"f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3"} Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.777712 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kvntc" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.777743 4996 scope.go:117] "RemoveContainer" containerID="f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.777710 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kvntc" event={"ID":"f16f7d94-dd7f-4b3a-addf-9780628ac217","Type":"ContainerDied","Data":"9c40536c5b94a4e0dcb04f2bb7efa73d985fe4abf8344e55eb170192eb2d8a91"} Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.781275 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" event={"ID":"e8534bde-79ad-4654-8a2b-8fa14ee7266b","Type":"ContainerStarted","Data":"4afdbdb557554d04e7258b1111568351fa09808a2540e1a308133a2389ab4401"} Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.781493 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" Feb 28 09:17:15 
crc kubenswrapper[4996]: I0228 09:17:15.803543 4996 scope.go:117] "RemoveContainer" containerID="866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.826206 4996 scope.go:117] "RemoveContainer" containerID="6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.856991 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" podStartSLOduration=2.348566217 podStartE2EDuration="6.856972253s" podCreationTimestamp="2026-02-28 09:17:09 +0000 UTC" firstStartedPulling="2026-02-28 09:17:10.369049633 +0000 UTC m=+994.059852444" lastFinishedPulling="2026-02-28 09:17:14.877455669 +0000 UTC m=+998.568258480" observedRunningTime="2026-02-28 09:17:15.851320324 +0000 UTC m=+999.542123125" watchObservedRunningTime="2026-02-28 09:17:15.856972253 +0000 UTC m=+999.547775064" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.866811 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvntc"] Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.870146 4996 scope.go:117] "RemoveContainer" containerID="f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3" Feb 28 09:17:15 crc kubenswrapper[4996]: E0228 09:17:15.870568 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3\": container with ID starting with f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3 not found: ID does not exist" containerID="f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.870596 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3"} err="failed to get container status \"f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3\": rpc error: code = NotFound desc = could not find container \"f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3\": container with ID starting with f84266f50cefc43a54dafd5e3b2f9a19ea995fb2c9ca15771cd62b1951779aa3 not found: ID does not exist" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.870618 4996 scope.go:117] "RemoveContainer" containerID="866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09" Feb 28 09:17:15 crc kubenswrapper[4996]: E0228 09:17:15.870896 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09\": container with ID starting with 866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09 not found: ID does not exist" containerID="866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.870944 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09"} err="failed to get container status \"866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09\": rpc error: code = NotFound desc = could not find container \"866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09\": container with ID starting with 866af6190bd7c8b5044f4fba813391fa5cb36a240ec4b49ab50453d380030a09 not found: ID does not exist" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.870975 4996 scope.go:117] "RemoveContainer" containerID="6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c" Feb 28 09:17:15 crc kubenswrapper[4996]: E0228 09:17:15.871449 4996 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c\": container with ID starting with 6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c not found: ID does not exist" containerID="6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.871510 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c"} err="failed to get container status \"6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c\": rpc error: code = NotFound desc = could not find container \"6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c\": container with ID starting with 6792fdf7b0a15422928a7fa1abf095144c734bd14c2a4268aa6d1e4d401d309c not found: ID does not exist" Feb 28 09:17:15 crc kubenswrapper[4996]: I0228 09:17:15.872352 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kvntc"] Feb 28 09:17:16 crc kubenswrapper[4996]: I0228 09:17:16.930727 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:16 crc kubenswrapper[4996]: I0228 09:17:16.931220 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:16 crc kubenswrapper[4996]: I0228 09:17:16.993697 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:17 crc kubenswrapper[4996]: I0228 09:17:17.048955 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" path="/var/lib/kubelet/pods/f16f7d94-dd7f-4b3a-addf-9780628ac217/volumes" Feb 28 09:17:17 crc kubenswrapper[4996]: I0228 09:17:17.874368 
4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.005538 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bdzw"] Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.006278 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5bdzw" podUID="bcc66c74-d127-4d39-9505-924c7484fc45" containerName="registry-server" containerID="cri-o://2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e" gracePeriod=2 Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.099825 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-646b94fdfc-pc26j" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.453351 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.550030 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-utilities\") pod \"bcc66c74-d127-4d39-9505-924c7484fc45\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.550293 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-catalog-content\") pod \"bcc66c74-d127-4d39-9505-924c7484fc45\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.550358 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khkcd\" (UniqueName: \"kubernetes.io/projected/bcc66c74-d127-4d39-9505-924c7484fc45-kube-api-access-khkcd\") pod \"bcc66c74-d127-4d39-9505-924c7484fc45\" (UID: \"bcc66c74-d127-4d39-9505-924c7484fc45\") " Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.551124 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-utilities" (OuterVolumeSpecName: "utilities") pod "bcc66c74-d127-4d39-9505-924c7484fc45" (UID: "bcc66c74-d127-4d39-9505-924c7484fc45"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.559247 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcc66c74-d127-4d39-9505-924c7484fc45-kube-api-access-khkcd" (OuterVolumeSpecName: "kube-api-access-khkcd") pod "bcc66c74-d127-4d39-9505-924c7484fc45" (UID: "bcc66c74-d127-4d39-9505-924c7484fc45"). InnerVolumeSpecName "kube-api-access-khkcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.607992 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcc66c74-d127-4d39-9505-924c7484fc45" (UID: "bcc66c74-d127-4d39-9505-924c7484fc45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.651559 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.651599 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khkcd\" (UniqueName: \"kubernetes.io/projected/bcc66c74-d127-4d39-9505-924c7484fc45-kube-api-access-khkcd\") on node \"crc\" DevicePath \"\"" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.651615 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcc66c74-d127-4d39-9505-924c7484fc45-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.825975 4996 generic.go:334] "Generic (PLEG): container finished" podID="bcc66c74-d127-4d39-9505-924c7484fc45" containerID="2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e" exitCode=0 Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.826031 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5bdzw" event={"ID":"bcc66c74-d127-4d39-9505-924c7484fc45","Type":"ContainerDied","Data":"2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e"} Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.826070 4996 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-5bdzw" event={"ID":"bcc66c74-d127-4d39-9505-924c7484fc45","Type":"ContainerDied","Data":"14f57866507c2ee5659c25148e9d8f231fd4ce31374b79a4617a0ff528281fec"} Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.826088 4996 scope.go:117] "RemoveContainer" containerID="2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.826093 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5bdzw" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.846133 4996 scope.go:117] "RemoveContainer" containerID="877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.868119 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5bdzw"] Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.873768 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5bdzw"] Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.878235 4996 scope.go:117] "RemoveContainer" containerID="80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.899186 4996 scope.go:117] "RemoveContainer" containerID="2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e" Feb 28 09:17:20 crc kubenswrapper[4996]: E0228 09:17:20.899679 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e\": container with ID starting with 2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e not found: ID does not exist" containerID="2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 
09:17:20.899710 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e"} err="failed to get container status \"2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e\": rpc error: code = NotFound desc = could not find container \"2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e\": container with ID starting with 2249dbc1b82d278150c6ca34b143bfae821328d25c0395495990ed3199c3404e not found: ID does not exist" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.899731 4996 scope.go:117] "RemoveContainer" containerID="877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e" Feb 28 09:17:20 crc kubenswrapper[4996]: E0228 09:17:20.900032 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e\": container with ID starting with 877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e not found: ID does not exist" containerID="877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.900070 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e"} err="failed to get container status \"877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e\": rpc error: code = NotFound desc = could not find container \"877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e\": container with ID starting with 877819aa0dfe3269672fb15f02796982af9ca39c93b25249b0069f9585e10c8e not found: ID does not exist" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.900095 4996 scope.go:117] "RemoveContainer" containerID="80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589" Feb 28 09:17:20 crc 
kubenswrapper[4996]: E0228 09:17:20.900521 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589\": container with ID starting with 80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589 not found: ID does not exist" containerID="80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589" Feb 28 09:17:20 crc kubenswrapper[4996]: I0228 09:17:20.900557 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589"} err="failed to get container status \"80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589\": rpc error: code = NotFound desc = could not find container \"80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589\": container with ID starting with 80553a533fcd7d75e60c163d7fe2b444caa38786a723228bc3a39d9b8aa2d589 not found: ID does not exist" Feb 28 09:17:21 crc kubenswrapper[4996]: I0228 09:17:21.040771 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcc66c74-d127-4d39-9505-924c7484fc45" path="/var/lib/kubelet/pods/bcc66c74-d127-4d39-9505-924c7484fc45/volumes" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.077313 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx"] Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.078164 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerName="extract-content" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.078181 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerName="extract-content" Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.078198 4996 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerName="registry-server" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.078206 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerName="registry-server" Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.078222 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerName="extract-utilities" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.078231 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerName="extract-utilities" Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.078249 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc66c74-d127-4d39-9505-924c7484fc45" containerName="extract-content" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.078256 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc66c74-d127-4d39-9505-924c7484fc45" containerName="extract-content" Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.078268 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc66c74-d127-4d39-9505-924c7484fc45" containerName="extract-utilities" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.078276 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc66c74-d127-4d39-9505-924c7484fc45" containerName="extract-utilities" Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.078289 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcc66c74-d127-4d39-9505-924c7484fc45" containerName="registry-server" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.078296 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcc66c74-d127-4d39-9505-924c7484fc45" containerName="registry-server" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.078417 4996 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f16f7d94-dd7f-4b3a-addf-9780628ac217" containerName="registry-server" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.078446 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcc66c74-d127-4d39-9505-924c7484fc45" containerName="registry-server" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.078929 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.084189 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-gztkx" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.087876 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.089213 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.092160 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-57msc" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.094463 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.107214 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.120311 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.121264 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.123054 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-thlxx" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.151226 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.165167 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.166166 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.166825 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.167407 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.179085 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8r78f" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.179387 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-h745s" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.183050 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.183848 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.191628 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.194534 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xdh72" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.197638 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.204723 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7m2\" (UniqueName: \"kubernetes.io/projected/9369ade1-1b2d-45cf-b376-1963d785be5c-kube-api-access-kd7m2\") pod \"barbican-operator-controller-manager-6db6876945-pppmx\" (UID: \"9369ade1-1b2d-45cf-b376-1963d785be5c\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.204857 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24zx\" (UniqueName: \"kubernetes.io/projected/93f353d8-bbaa-4ec1-b816-d23d58c05ee1-kube-api-access-l24zx\") pod \"cinder-operator-controller-manager-55d77d7b5c-qs4sw\" (UID: \"93f353d8-bbaa-4ec1-b816-d23d58c05ee1\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.204892 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9gcd\" (UniqueName: \"kubernetes.io/projected/996ef81c-b994-461d-a9e0-ec61f8fe65f3-kube-api-access-g9gcd\") pod 
\"designate-operator-controller-manager-5d87c9d997-bk2ml\" (UID: \"996ef81c-b994-461d-a9e0-ec61f8fe65f3\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.217054 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.232114 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.233121 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.243347 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4hsnc"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.243404 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.247748 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.248512 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.252610 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.260929 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.261070 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.263978 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mcgrn"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.264578 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-s2z9v"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.268630 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.284472 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.300163 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-pp79l"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.300955 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.306101 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-485z5\" (UniqueName: \"kubernetes.io/projected/c8adb771-c22d-4f69-90a5-61cd4a36b618-kube-api-access-485z5\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.306147 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7m2\" (UniqueName: \"kubernetes.io/projected/9369ade1-1b2d-45cf-b376-1963d785be5c-kube-api-access-kd7m2\") pod \"barbican-operator-controller-manager-6db6876945-pppmx\" (UID: \"9369ade1-1b2d-45cf-b376-1963d785be5c\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.306190 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.306219 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn64d\" (UniqueName: \"kubernetes.io/projected/325efd0b-ff17-4ea1-a1d5-c12576259ce5-kube-api-access-qn64d\") pod \"glance-operator-controller-manager-64db6967f8-2bv7j\" (UID: \"325efd0b-ff17-4ea1-a1d5-c12576259ce5\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.306279 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l24zx\" (UniqueName: \"kubernetes.io/projected/93f353d8-bbaa-4ec1-b816-d23d58c05ee1-kube-api-access-l24zx\") pod \"cinder-operator-controller-manager-55d77d7b5c-qs4sw\" (UID: \"93f353d8-bbaa-4ec1-b816-d23d58c05ee1\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.306310 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl96p\" (UniqueName: \"kubernetes.io/projected/9b610086-19c9-4e01-8c4e-dcf6660d749e-kube-api-access-wl96p\") pod \"heat-operator-controller-manager-cf99c678f-4zmmg\" (UID: \"9b610086-19c9-4e01-8c4e-dcf6660d749e\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.306335 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9gcd\" (UniqueName: \"kubernetes.io/projected/996ef81c-b994-461d-a9e0-ec61f8fe65f3-kube-api-access-g9gcd\") pod \"designate-operator-controller-manager-5d87c9d997-bk2ml\" (UID: \"996ef81c-b994-461d-a9e0-ec61f8fe65f3\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.306362 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fghs5\" (UniqueName: \"kubernetes.io/projected/cb2d53e3-ca80-4c1c-8d0b-02caeb753792-kube-api-access-fghs5\") pod \"horizon-operator-controller-manager-78bc7f9bd9-7cdhw\" (UID: \"cb2d53e3-ca80-4c1c-8d0b-02caeb753792\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.307220 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-ffhv8"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.313521 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.314459 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.318060 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-pp79l"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.322871 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9b7gs"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.347116 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7m2\" (UniqueName: \"kubernetes.io/projected/9369ade1-1b2d-45cf-b376-1963d785be5c-kube-api-access-kd7m2\") pod \"barbican-operator-controller-manager-6db6876945-pppmx\" (UID: \"9369ade1-1b2d-45cf-b376-1963d785be5c\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.347490 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9gcd\" (UniqueName: \"kubernetes.io/projected/996ef81c-b994-461d-a9e0-ec61f8fe65f3-kube-api-access-g9gcd\") pod \"designate-operator-controller-manager-5d87c9d997-bk2ml\" (UID: \"996ef81c-b994-461d-a9e0-ec61f8fe65f3\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.361078 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.370442 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l24zx\" (UniqueName: \"kubernetes.io/projected/93f353d8-bbaa-4ec1-b816-d23d58c05ee1-kube-api-access-l24zx\") pod \"cinder-operator-controller-manager-55d77d7b5c-qs4sw\" (UID: \"93f353d8-bbaa-4ec1-b816-d23d58c05ee1\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.375888 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.376666 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.384367 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-f9vwm"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.400075 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.406105 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.407297 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.408439 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpzxs\" (UniqueName: \"kubernetes.io/projected/de3f6975-8417-4db2-9d04-5364f4127334-kube-api-access-zpzxs\") pod \"mariadb-operator-controller-manager-7b6bfb6475-zmcjg\" (UID: \"de3f6975-8417-4db2-9d04-5364f4127334\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.408472 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.408494 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn64d\" (UniqueName: \"kubernetes.io/projected/325efd0b-ff17-4ea1-a1d5-c12576259ce5-kube-api-access-qn64d\") pod \"glance-operator-controller-manager-64db6967f8-2bv7j\" (UID: \"325efd0b-ff17-4ea1-a1d5-c12576259ce5\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.408520 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twzdw\" (UniqueName: \"kubernetes.io/projected/9d5331d1-e5df-4b2c-8663-5fe6afc00995-kube-api-access-twzdw\") pod \"manila-operator-controller-manager-67d996989d-pp79l\" (UID: \"9d5331d1-e5df-4b2c-8663-5fe6afc00995\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.408565 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl96p\" (UniqueName: \"kubernetes.io/projected/9b610086-19c9-4e01-8c4e-dcf6660d749e-kube-api-access-wl96p\") pod \"heat-operator-controller-manager-cf99c678f-4zmmg\" (UID: \"9b610086-19c9-4e01-8c4e-dcf6660d749e\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.408595 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm4kj\" (UniqueName: \"kubernetes.io/projected/5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3-kube-api-access-dm4kj\") pod \"keystone-operator-controller-manager-7c789f89c6-clblk\" (UID: \"5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.408611 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rq72\" (UniqueName: \"kubernetes.io/projected/07d97cb5-6c6a-4d30-9454-8c13b5fc9adc-kube-api-access-8rq72\") pod \"ironic-operator-controller-manager-545456dc4-6kcm4\" (UID: \"07d97cb5-6c6a-4d30-9454-8c13b5fc9adc\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.408628 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fghs5\" (UniqueName: \"kubernetes.io/projected/cb2d53e3-ca80-4c1c-8d0b-02caeb753792-kube-api-access-fghs5\") pod \"horizon-operator-controller-manager-78bc7f9bd9-7cdhw\" (UID: \"cb2d53e3-ca80-4c1c-8d0b-02caeb753792\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.408660 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-485z5\" (UniqueName: \"kubernetes.io/projected/c8adb771-c22d-4f69-90a5-61cd4a36b618-kube-api-access-485z5\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt"
Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.408885 4996 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.408930 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert podName:c8adb771-c22d-4f69-90a5-61cd4a36b618 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:40.908915633 +0000 UTC m=+1024.599718434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert") pod "infra-operator-controller-manager-f7fcc58b9-x6bwt" (UID: "c8adb771-c22d-4f69-90a5-61cd4a36b618") : secret "infra-operator-webhook-server-cert" not found
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.409343 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-q964z"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.410063 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.410877 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.413536 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-89lmz"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.415174 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.430558 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn64d\" (UniqueName: \"kubernetes.io/projected/325efd0b-ff17-4ea1-a1d5-c12576259ce5-kube-api-access-qn64d\") pod \"glance-operator-controller-manager-64db6967f8-2bv7j\" (UID: \"325efd0b-ff17-4ea1-a1d5-c12576259ce5\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.456306 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.460930 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl96p\" (UniqueName: \"kubernetes.io/projected/9b610086-19c9-4e01-8c4e-dcf6660d749e-kube-api-access-wl96p\") pod \"heat-operator-controller-manager-cf99c678f-4zmmg\" (UID: \"9b610086-19c9-4e01-8c4e-dcf6660d749e\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.463318 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.475305 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-485z5\" (UniqueName: \"kubernetes.io/projected/c8adb771-c22d-4f69-90a5-61cd4a36b618-kube-api-access-485z5\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.475953 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fghs5\" (UniqueName: \"kubernetes.io/projected/cb2d53e3-ca80-4c1c-8d0b-02caeb753792-kube-api-access-fghs5\") pod \"horizon-operator-controller-manager-78bc7f9bd9-7cdhw\" (UID: \"cb2d53e3-ca80-4c1c-8d0b-02caeb753792\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.483072 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.495231 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.497540 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.509227 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.509825 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8dp\" (UniqueName: \"kubernetes.io/projected/a4e35f97-45f5-457f-bc93-86536fcbee68-kube-api-access-2g8dp\") pod \"nova-operator-controller-manager-74b6b5dc96-d5qml\" (UID: \"a4e35f97-45f5-457f-bc93-86536fcbee68\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.509887 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm4kj\" (UniqueName: \"kubernetes.io/projected/5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3-kube-api-access-dm4kj\") pod \"keystone-operator-controller-manager-7c789f89c6-clblk\" (UID: \"5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.509907 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rq72\" (UniqueName: \"kubernetes.io/projected/07d97cb5-6c6a-4d30-9454-8c13b5fc9adc-kube-api-access-8rq72\") pod \"ironic-operator-controller-manager-545456dc4-6kcm4\" (UID: \"07d97cb5-6c6a-4d30-9454-8c13b5fc9adc\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.509946 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpzxs\" (UniqueName: \"kubernetes.io/projected/de3f6975-8417-4db2-9d04-5364f4127334-kube-api-access-zpzxs\") pod \"mariadb-operator-controller-manager-7b6bfb6475-zmcjg\" (UID: \"de3f6975-8417-4db2-9d04-5364f4127334\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.509994 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46lx\" (UniqueName: \"kubernetes.io/projected/bd32451f-7a7d-429f-906f-d98e355c1abf-kube-api-access-c46lx\") pod \"octavia-operator-controller-manager-5d86c7ddb7-tdl5w\" (UID: \"bd32451f-7a7d-429f-906f-d98e355c1abf\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.510980 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6xdt\" (UniqueName: \"kubernetes.io/projected/84adbefa-8503-41bf-8b9b-662b08251cff-kube-api-access-w6xdt\") pod \"neutron-operator-controller-manager-54688575f-kx6ht\" (UID: \"84adbefa-8503-41bf-8b9b-662b08251cff\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.511575 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twzdw\" (UniqueName: \"kubernetes.io/projected/9d5331d1-e5df-4b2c-8663-5fe6afc00995-kube-api-access-twzdw\") pod \"manila-operator-controller-manager-67d996989d-pp79l\" (UID: \"9d5331d1-e5df-4b2c-8663-5fe6afc00995\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.510256 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.512966 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.513791 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.514431 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4nmq6"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.515304 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.517480 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-6snhz"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.517725 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.518740 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.519590 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.525321 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-24n96"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.525588 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.528971 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rq72\" (UniqueName: \"kubernetes.io/projected/07d97cb5-6c6a-4d30-9454-8c13b5fc9adc-kube-api-access-8rq72\") pod \"ironic-operator-controller-manager-545456dc4-6kcm4\" (UID: \"07d97cb5-6c6a-4d30-9454-8c13b5fc9adc\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.529678 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.533235 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm4kj\" (UniqueName: \"kubernetes.io/projected/5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3-kube-api-access-dm4kj\") pod \"keystone-operator-controller-manager-7c789f89c6-clblk\" (UID: \"5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.535897 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.536709 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twzdw\" (UniqueName: \"kubernetes.io/projected/9d5331d1-e5df-4b2c-8663-5fe6afc00995-kube-api-access-twzdw\") pod \"manila-operator-controller-manager-67d996989d-pp79l\" (UID: \"9d5331d1-e5df-4b2c-8663-5fe6afc00995\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.542441 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpzxs\" (UniqueName: \"kubernetes.io/projected/de3f6975-8417-4db2-9d04-5364f4127334-kube-api-access-zpzxs\") pod \"mariadb-operator-controller-manager-7b6bfb6475-zmcjg\" (UID: \"de3f6975-8417-4db2-9d04-5364f4127334\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.552455 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.556795 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.557821 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.562717 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-vsjs2"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.571739 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.589059 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.590082 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.591962 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gkkx6"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.593800 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.601137 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.617113 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.617843 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.617902 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9rh5\" (UniqueName: \"kubernetes.io/projected/3a0ccc77-1ced-4c14-a1ac-18523be0afd4-kube-api-access-t9rh5\") pod \"placement-operator-controller-manager-648564c9fc-q59cv\" (UID: \"3a0ccc77-1ced-4c14-a1ac-18523be0afd4\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.617939 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqq9d\" (UniqueName: \"kubernetes.io/projected/b503546b-54b6-4133-8d44-6a162ef54232-kube-api-access-pqq9d\") pod \"swift-operator-controller-manager-9b9ff9f4d-c9v26\" (UID: \"b503546b-54b6-4133-8d44-6a162ef54232\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.617963 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c46lx\" (UniqueName: \"kubernetes.io/projected/bd32451f-7a7d-429f-906f-d98e355c1abf-kube-api-access-c46lx\") pod \"octavia-operator-controller-manager-5d86c7ddb7-tdl5w\" (UID: \"bd32451f-7a7d-429f-906f-d98e355c1abf\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.617972 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.617980 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6xdt\" (UniqueName: \"kubernetes.io/projected/84adbefa-8503-41bf-8b9b-662b08251cff-kube-api-access-w6xdt\") pod \"neutron-operator-controller-manager-54688575f-kx6ht\" (UID: \"84adbefa-8503-41bf-8b9b-662b08251cff\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.617999 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krvbc\" (UniqueName: \"kubernetes.io/projected/8a886fa9-0abd-4197-9a18-09f20f403ef4-kube-api-access-krvbc\") pod \"ovn-operator-controller-manager-75684d597f-wwzjf\" (UID: \"8a886fa9-0abd-4197-9a18-09f20f403ef4\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.618033 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9fzs\" (UniqueName: \"kubernetes.io/projected/2de35814-cd78-4178-8b32-1fbd89de94b4-kube-api-access-v9fzs\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.618062 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g8dp\" (UniqueName: \"kubernetes.io/projected/a4e35f97-45f5-457f-bc93-86536fcbee68-kube-api-access-2g8dp\") pod \"nova-operator-controller-manager-74b6b5dc96-d5qml\" (UID: \"a4e35f97-45f5-457f-bc93-86536fcbee68\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.619344 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.629179 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.631135 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-zgsk7"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.640145 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.640411 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c46lx\" (UniqueName: \"kubernetes.io/projected/bd32451f-7a7d-429f-906f-d98e355c1abf-kube-api-access-c46lx\") pod \"octavia-operator-controller-manager-5d86c7ddb7-tdl5w\" (UID: \"bd32451f-7a7d-429f-906f-d98e355c1abf\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.644490 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g8dp\" (UniqueName: \"kubernetes.io/projected/a4e35f97-45f5-457f-bc93-86536fcbee68-kube-api-access-2g8dp\") pod \"nova-operator-controller-manager-74b6b5dc96-d5qml\" (UID: \"a4e35f97-45f5-457f-bc93-86536fcbee68\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.662861 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6xdt\" (UniqueName: \"kubernetes.io/projected/84adbefa-8503-41bf-8b9b-662b08251cff-kube-api-access-w6xdt\") pod \"neutron-operator-controller-manager-54688575f-kx6ht\" (UID: \"84adbefa-8503-41bf-8b9b-662b08251cff\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.692349 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.692839 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.693693 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.699949 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv"]
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.703839 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sf245"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.712740 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.721706 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krvbc\" (UniqueName: \"kubernetes.io/projected/8a886fa9-0abd-4197-9a18-09f20f403ef4-kube-api-access-krvbc\") pod \"ovn-operator-controller-manager-75684d597f-wwzjf\" (UID: \"8a886fa9-0abd-4197-9a18-09f20f403ef4\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.721780 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9fzs\" (UniqueName: \"kubernetes.io/projected/2de35814-cd78-4178-8b32-1fbd89de94b4-kube-api-access-v9fzs\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.721826 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6srn\" (UniqueName: \"kubernetes.io/projected/2d7f8619-4576-4fb4-83e1-73ebe232a06d-kube-api-access-l6srn\") pod \"telemetry-operator-controller-manager-5fdb694969-f88sn\" (UID: \"2d7f8619-4576-4fb4-83e1-73ebe232a06d\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.721864 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z47w\" (UniqueName: \"kubernetes.io/projected/e4770c19-1759-4f93-88ea-696d28d6b149-kube-api-access-6z47w\") pod \"test-operator-controller-manager-655d95ddc7-xxt4d\" (UID: \"e4770c19-1759-4f93-88ea-696d28d6b149\") " pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.721905 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.721979 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9rh5\" (UniqueName: \"kubernetes.io/projected/3a0ccc77-1ced-4c14-a1ac-18523be0afd4-kube-api-access-t9rh5\") pod \"placement-operator-controller-manager-648564c9fc-q59cv\" (UID: \"3a0ccc77-1ced-4c14-a1ac-18523be0afd4\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv"
Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.722036 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqq9d\" (UniqueName: \"kubernetes.io/projected/b503546b-54b6-4133-8d44-6a162ef54232-kube-api-access-pqq9d\") pod \"swift-operator-controller-manager-9b9ff9f4d-c9v26\" (UID: \"b503546b-54b6-4133-8d44-6a162ef54232\") " 
pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26" Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.724747 4996 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.724824 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert podName:2de35814-cd78-4178-8b32-1fbd89de94b4 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:41.224778821 +0000 UTC m=+1024.915581632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" (UID: "2de35814-cd78-4178-8b32-1fbd89de94b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.725658 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.746628 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqq9d\" (UniqueName: \"kubernetes.io/projected/b503546b-54b6-4133-8d44-6a162ef54232-kube-api-access-pqq9d\") pod \"swift-operator-controller-manager-9b9ff9f4d-c9v26\" (UID: \"b503546b-54b6-4133-8d44-6a162ef54232\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.748280 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9rh5\" (UniqueName: \"kubernetes.io/projected/3a0ccc77-1ced-4c14-a1ac-18523be0afd4-kube-api-access-t9rh5\") pod \"placement-operator-controller-manager-648564c9fc-q59cv\" (UID: \"3a0ccc77-1ced-4c14-a1ac-18523be0afd4\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.749072 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9fzs\" (UniqueName: \"kubernetes.io/projected/2de35814-cd78-4178-8b32-1fbd89de94b4-kube-api-access-v9fzs\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.753545 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krvbc\" (UniqueName: \"kubernetes.io/projected/8a886fa9-0abd-4197-9a18-09f20f403ef4-kube-api-access-krvbc\") pod \"ovn-operator-controller-manager-75684d597f-wwzjf\" (UID: \"8a886fa9-0abd-4197-9a18-09f20f403ef4\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.795355 4996 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.796757 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.799444 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.799452 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.803084 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-77q9g" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.811492 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.822861 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6srn\" (UniqueName: \"kubernetes.io/projected/2d7f8619-4576-4fb4-83e1-73ebe232a06d-kube-api-access-l6srn\") pod \"telemetry-operator-controller-manager-5fdb694969-f88sn\" (UID: \"2d7f8619-4576-4fb4-83e1-73ebe232a06d\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.822954 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z47w\" (UniqueName: \"kubernetes.io/projected/e4770c19-1759-4f93-88ea-696d28d6b149-kube-api-access-6z47w\") pod \"test-operator-controller-manager-655d95ddc7-xxt4d\" (UID: \"e4770c19-1759-4f93-88ea-696d28d6b149\") " pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.823164 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l68n2\" (UniqueName: \"kubernetes.io/projected/b1a21e4c-eb15-4914-9366-45a0bc6f2e3d-kube-api-access-l68n2\") pod \"watcher-operator-controller-manager-bccc79885-qzzwv\" (UID: \"b1a21e4c-eb15-4914-9366-45a0bc6f2e3d\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.831648 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.842419 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.845943 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z47w\" (UniqueName: \"kubernetes.io/projected/e4770c19-1759-4f93-88ea-696d28d6b149-kube-api-access-6z47w\") pod \"test-operator-controller-manager-655d95ddc7-xxt4d\" (UID: \"e4770c19-1759-4f93-88ea-696d28d6b149\") " pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.847805 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6srn\" (UniqueName: \"kubernetes.io/projected/2d7f8619-4576-4fb4-83e1-73ebe232a06d-kube-api-access-l6srn\") pod \"telemetry-operator-controller-manager-5fdb694969-f88sn\" (UID: \"2d7f8619-4576-4fb4-83e1-73ebe232a06d\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.872298 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.873877 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.877235 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-5n8jf" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.910358 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k"] Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.927616 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.928364 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.928425 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.928455 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l68n2\" (UniqueName: \"kubernetes.io/projected/b1a21e4c-eb15-4914-9366-45a0bc6f2e3d-kube-api-access-l68n2\") pod \"watcher-operator-controller-manager-bccc79885-qzzwv\" (UID: \"b1a21e4c-eb15-4914-9366-45a0bc6f2e3d\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.928537 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.928583 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqskm\" (UniqueName: \"kubernetes.io/projected/36f7fdcf-d295-4ee0-9155-fbd3dc0d1234-kube-api-access-dqskm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tn67k\" (UID: \"36f7fdcf-d295-4ee0-9155-fbd3dc0d1234\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.928612 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsmpw\" (UniqueName: \"kubernetes.io/projected/143a07a9-b2e4-4b4b-9328-a3feee140c26-kube-api-access-zsmpw\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.928857 4996 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 09:17:40 crc kubenswrapper[4996]: E0228 09:17:40.928904 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert podName:c8adb771-c22d-4f69-90a5-61cd4a36b618 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:41.928889597 +0000 UTC m=+1025.619692408 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert") pod "infra-operator-controller-manager-f7fcc58b9-x6bwt" (UID: "c8adb771-c22d-4f69-90a5-61cd4a36b618") : secret "infra-operator-webhook-server-cert" not found Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.957381 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26" Feb 28 09:17:40 crc kubenswrapper[4996]: I0228 09:17:40.976887 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l68n2\" (UniqueName: \"kubernetes.io/projected/b1a21e4c-eb15-4914-9366-45a0bc6f2e3d-kube-api-access-l68n2\") pod \"watcher-operator-controller-manager-bccc79885-qzzwv\" (UID: \"b1a21e4c-eb15-4914-9366-45a0bc6f2e3d\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.007999 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.034092 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.034155 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqskm\" (UniqueName: \"kubernetes.io/projected/36f7fdcf-d295-4ee0-9155-fbd3dc0d1234-kube-api-access-dqskm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tn67k\" (UID: \"36f7fdcf-d295-4ee0-9155-fbd3dc0d1234\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.034191 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsmpw\" (UniqueName: \"kubernetes.io/projected/143a07a9-b2e4-4b4b-9328-a3feee140c26-kube-api-access-zsmpw\") pod 
\"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.034220 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.034400 4996 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.034456 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:41.534439046 +0000 UTC m=+1025.225241857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "webhook-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.034786 4996 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.034827 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. 
No retries permitted until 2026-02-28 09:17:41.534815535 +0000 UTC m=+1025.225618346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "metrics-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.042570 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.055367 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.075770 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqskm\" (UniqueName: \"kubernetes.io/projected/36f7fdcf-d295-4ee0-9155-fbd3dc0d1234-kube-api-access-dqskm\") pod \"rabbitmq-cluster-operator-manager-668c99d594-tn67k\" (UID: \"36f7fdcf-d295-4ee0-9155-fbd3dc0d1234\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.079338 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsmpw\" (UniqueName: \"kubernetes.io/projected/143a07a9-b2e4-4b4b-9328-a3feee140c26-kube-api-access-zsmpw\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.236833 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert\") pod 
\"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.237058 4996 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.237246 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert podName:2de35814-cd78-4178-8b32-1fbd89de94b4 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:42.237231019 +0000 UTC m=+1025.928033830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" (UID: "2de35814-cd78-4178-8b32-1fbd89de94b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.316364 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.542353 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.542401 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.542541 4996 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.542593 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:42.542578647 +0000 UTC m=+1026.233381458 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "webhook-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.542912 4996 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.542940 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:42.542933525 +0000 UTC m=+1026.233736336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "metrics-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.807967 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx"] Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.960087 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" Feb 28 09:17:41 crc kubenswrapper[4996]: E0228 09:17:41.960250 4996 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 09:17:41 crc 
kubenswrapper[4996]: E0228 09:17:41.960332 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert podName:c8adb771-c22d-4f69-90a5-61cd4a36b618 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:43.960312723 +0000 UTC m=+1027.651115534 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert") pod "infra-operator-controller-manager-f7fcc58b9-x6bwt" (UID: "c8adb771-c22d-4f69-90a5-61cd4a36b618") : secret "infra-operator-webhook-server-cert" not found Feb 28 09:17:41 crc kubenswrapper[4996]: I0228 09:17:41.980213 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx" event={"ID":"9369ade1-1b2d-45cf-b376-1963d785be5c","Type":"ContainerStarted","Data":"d7f0dae64bd22a36d40baf70f71280f77d5e9e1417379225c0b885ba39614c33"} Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.100579 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.111965 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk"] Feb 28 09:17:42 crc kubenswrapper[4996]: W0228 09:17:42.139477 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod996ef81c_b994_461d_a9e0_ec61f8fe65f3.slice/crio-806eb2930f53f5dff05a3eeb1d8b63161fcb62ea5efb3db110f89a3b77ff1afb WatchSource:0}: Error finding container 806eb2930f53f5dff05a3eeb1d8b63161fcb62ea5efb3db110f89a3b77ff1afb: Status 404 returned error can't find the container with id 806eb2930f53f5dff05a3eeb1d8b63161fcb62ea5efb3db110f89a3b77ff1afb Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.167293 4996 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.173462 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.179288 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.188810 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.195540 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-pp79l"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.214349 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.220702 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.224973 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.229906 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.233957 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.238882 4996 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4"] Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.241667 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8rq72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-545456dc4-6kcm4_openstack-operators(07d97cb5-6c6a-4d30-9454-8c13b5fc9adc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.241765 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t9rh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-648564c9fc-q59cv_openstack-operators(3a0ccc77-1ced-4c14-a1ac-18523be0afd4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.242745 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4" podUID="07d97cb5-6c6a-4d30-9454-8c13b5fc9adc" Feb 28 09:17:42 crc 
kubenswrapper[4996]: E0228 09:17:42.242824 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" podUID="3a0ccc77-1ced-4c14-a1ac-18523be0afd4" Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.243174 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.247422 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg"] Feb 28 09:17:42 crc kubenswrapper[4996]: W0228 09:17:42.251282 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde3f6975_8417_4db2_9d04_5364f4127334.slice/crio-ca7b7df8adca9b3d62dc186a32bad38025836349667cc0a7c64d8e2492b1b9eb WatchSource:0}: Error finding container ca7b7df8adca9b3d62dc186a32bad38025836349667cc0a7c64d8e2492b1b9eb: Status 404 returned error can't find the container with id ca7b7df8adca9b3d62dc186a32bad38025836349667cc0a7c64d8e2492b1b9eb Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.251597 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k"] Feb 28 09:17:42 crc kubenswrapper[4996]: W0228 09:17:42.252155 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a886fa9_0abd_4197_9a18_09f20f403ef4.slice/crio-f1a4278662e466050fd6f3d0d413dc286eb0bcaadc91bb096c85e9b715074b6d WatchSource:0}: Error finding container f1a4278662e466050fd6f3d0d413dc286eb0bcaadc91bb096c85e9b715074b6d: Status 404 returned error can't find the container with id f1a4278662e466050fd6f3d0d413dc286eb0bcaadc91bb096c85e9b715074b6d Feb 28 
09:17:42 crc kubenswrapper[4996]: W0228 09:17:42.252936 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36f7fdcf_d295_4ee0_9155_fbd3dc0d1234.slice/crio-443dd07b5f953008c338aa2598a2e0ba04b1638165b357aa1a4f1552813db394 WatchSource:0}: Error finding container 443dd07b5f953008c338aa2598a2e0ba04b1638165b357aa1a4f1552813db394: Status 404 returned error can't find the container with id 443dd07b5f953008c338aa2598a2e0ba04b1638165b357aa1a4f1552813db394 Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.257401 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn"] Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.259838 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv"] Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.271384 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krvbc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-75684d597f-wwzjf_openstack-operators(8a886fa9-0abd-4197-9a18-09f20f403ef4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.272510 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" podUID="8a886fa9-0abd-4197-9a18-09f20f403ef4" Feb 28 09:17:42 crc 
kubenswrapper[4996]: I0228 09:17:42.274599 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.274758 4996 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.274822 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert podName:2de35814-cd78-4178-8b32-1fbd89de94b4 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:44.274788587 +0000 UTC m=+1027.965591398 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" (UID: "2de35814-cd78-4178-8b32-1fbd89de94b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.275071 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqskm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-tn67k_openstack-operators(36f7fdcf-d295-4ee0-9155-fbd3dc0d1234): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.275172 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l68n2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-qzzwv_openstack-operators(b1a21e4c-eb15-4914-9366-45a0bc6f2e3d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.276543 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" podUID="b1a21e4c-eb15-4914-9366-45a0bc6f2e3d" Feb 28 09:17:42 crc 
kubenswrapper[4996]: E0228 09:17:42.276590 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" podUID="36f7fdcf-d295-4ee0-9155-fbd3dc0d1234" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.278484 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpzxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-7b6bfb6475-zmcjg_openstack-operators(de3f6975-8417-4db2-9d04-5364f4127334): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.278493 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6srn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fdb694969-f88sn_openstack-operators(2d7f8619-4576-4fb4-83e1-73ebe232a06d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.279532 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.103:5001/openstack-k8s-operators/test-operator:a814a5a671bcd1b4c812ef56c7666350580d6756,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6z47w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-655d95ddc7-xxt4d_openstack-operators(e4770c19-1759-4f93-88ea-696d28d6b149): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.279588 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" podUID="2d7f8619-4576-4fb4-83e1-73ebe232a06d" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.279612 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg" podUID="de3f6975-8417-4db2-9d04-5364f4127334" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.280848 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" podUID="e4770c19-1759-4f93-88ea-696d28d6b149" Feb 28 09:17:42 crc 
kubenswrapper[4996]: I0228 09:17:42.314052 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26"] Feb 28 09:17:42 crc kubenswrapper[4996]: W0228 09:17:42.321833 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb503546b_54b6_4133_8d44_6a162ef54232.slice/crio-236c6d8a2e105de921a8306f3b5ad703631b5317828de4467d0a1764586b4476 WatchSource:0}: Error finding container 236c6d8a2e105de921a8306f3b5ad703631b5317828de4467d0a1764586b4476: Status 404 returned error can't find the container with id 236c6d8a2e105de921a8306f3b5ad703631b5317828de4467d0a1764586b4476 Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.580975 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.581200 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.581132 4996 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.581707 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs 
podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:44.581684203 +0000 UTC m=+1028.272487034 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "metrics-server-cert" not found Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.581367 4996 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.581810 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:44.581777496 +0000 UTC m=+1028.272580397 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "webhook-server-cert" not found Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.993379 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4" event={"ID":"07d97cb5-6c6a-4d30-9454-8c13b5fc9adc","Type":"ContainerStarted","Data":"f7ec84811d168206c94d7c32cedaed1b73cbc4b0ceb97ae9af9a6baf62cbd67c"} Feb 28 09:17:42 crc kubenswrapper[4996]: E0228 09:17:42.995135 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4" podUID="07d97cb5-6c6a-4d30-9454-8c13b5fc9adc" Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.995200 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26" event={"ID":"b503546b-54b6-4133-8d44-6a162ef54232","Type":"ContainerStarted","Data":"236c6d8a2e105de921a8306f3b5ad703631b5317828de4467d0a1764586b4476"} Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.998073 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w" event={"ID":"bd32451f-7a7d-429f-906f-d98e355c1abf","Type":"ContainerStarted","Data":"740b52c7ba04bbdd5f51c13aefddce6ea97e063a0d66e7fd230d1269e7dfabe5"} Feb 28 09:17:42 crc kubenswrapper[4996]: I0228 09:17:42.999337 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht" event={"ID":"84adbefa-8503-41bf-8b9b-662b08251cff","Type":"ContainerStarted","Data":"cf384be71e320b4328d1a772a8815fd1f80c7a9b78fa9fe525e9a13f09d79aa1"} Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.001515 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" event={"ID":"8a886fa9-0abd-4197-9a18-09f20f403ef4","Type":"ContainerStarted","Data":"f1a4278662e466050fd6f3d0d413dc286eb0bcaadc91bb096c85e9b715074b6d"} Feb 28 09:17:43 crc kubenswrapper[4996]: E0228 09:17:43.002790 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" podUID="8a886fa9-0abd-4197-9a18-09f20f403ef4" Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.003225 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l" event={"ID":"9d5331d1-e5df-4b2c-8663-5fe6afc00995","Type":"ContainerStarted","Data":"03ab2e1ab904fbd6482134e04d1e93206fcaab6e97dd4544caa9d02baee74728"} Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.005273 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" event={"ID":"2d7f8619-4576-4fb4-83e1-73ebe232a06d","Type":"ContainerStarted","Data":"ad30ce3e01d7d8279e49b90c80e14ddcb779d689fb097ad61ca0e1cf22367250"} Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.007446 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" 
event={"ID":"3a0ccc77-1ced-4c14-a1ac-18523be0afd4","Type":"ContainerStarted","Data":"615b7416995118508bbd4ec6d34c904ef81359d9f55dcc21b8d002be62c8712d"} Feb 28 09:17:43 crc kubenswrapper[4996]: E0228 09:17:43.009644 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" podUID="3a0ccc77-1ced-4c14-a1ac-18523be0afd4" Feb 28 09:17:43 crc kubenswrapper[4996]: E0228 09:17:43.010728 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" podUID="2d7f8619-4576-4fb4-83e1-73ebe232a06d" Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.016776 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" event={"ID":"b1a21e4c-eb15-4914-9366-45a0bc6f2e3d","Type":"ContainerStarted","Data":"93480c347b4792cd5722db22923878046df428bf220ef60ed9346debe8e13110"} Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.019497 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw" event={"ID":"93f353d8-bbaa-4ec1-b816-d23d58c05ee1","Type":"ContainerStarted","Data":"e1526e69dfc22e9f613596b9bde825e88a6518355a9bc8378f2d1af83d0a8760"} Feb 28 09:17:43 crc kubenswrapper[4996]: E0228 09:17:43.019656 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" podUID="b1a21e4c-eb15-4914-9366-45a0bc6f2e3d" Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.021299 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j" event={"ID":"325efd0b-ff17-4ea1-a1d5-c12576259ce5","Type":"ContainerStarted","Data":"f500d2a2f94ffff7acfbac2266e147d01dfb6e5ee316881c3e89c86bfc5c326a"} Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.025250 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk" event={"ID":"5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3","Type":"ContainerStarted","Data":"d8025cbdd845b4a975f21a348de35f3b97c40d03a1239bf27efb5629808355e9"} Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.036431 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml" event={"ID":"996ef81c-b994-461d-a9e0-ec61f8fe65f3","Type":"ContainerStarted","Data":"806eb2930f53f5dff05a3eeb1d8b63161fcb62ea5efb3db110f89a3b77ff1afb"} Feb 28 09:17:43 crc kubenswrapper[4996]: E0228 09:17:43.052869 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" podUID="36f7fdcf-d295-4ee0-9155-fbd3dc0d1234" Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.054054 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" event={"ID":"36f7fdcf-d295-4ee0-9155-fbd3dc0d1234","Type":"ContainerStarted","Data":"443dd07b5f953008c338aa2598a2e0ba04b1638165b357aa1a4f1552813db394"} Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.054108 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw" event={"ID":"cb2d53e3-ca80-4c1c-8d0b-02caeb753792","Type":"ContainerStarted","Data":"973a05c944a9fb9355eab2233ec34c4d84375dd067c59880b50dfac2571e6b60"} Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.054766 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml" event={"ID":"a4e35f97-45f5-457f-bc93-86536fcbee68","Type":"ContainerStarted","Data":"4af4421917cb766059750c47345dc217779da2b3c64368070b48e0451123a766"} Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.065714 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" event={"ID":"e4770c19-1759-4f93-88ea-696d28d6b149","Type":"ContainerStarted","Data":"f698e3bd9adc1204248c752f583a2c8a95969741119e83056b8f52c58809d3fb"} Feb 28 09:17:43 crc kubenswrapper[4996]: E0228 09:17:43.068316 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.103:5001/openstack-k8s-operators/test-operator:a814a5a671bcd1b4c812ef56c7666350580d6756\\\"\"" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" podUID="e4770c19-1759-4f93-88ea-696d28d6b149" Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.080602 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg" 
event={"ID":"de3f6975-8417-4db2-9d04-5364f4127334","Type":"ContainerStarted","Data":"ca7b7df8adca9b3d62dc186a32bad38025836349667cc0a7c64d8e2492b1b9eb"} Feb 28 09:17:43 crc kubenswrapper[4996]: E0228 09:17:43.081728 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg" podUID="de3f6975-8417-4db2-9d04-5364f4127334" Feb 28 09:17:43 crc kubenswrapper[4996]: I0228 09:17:43.082157 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg" event={"ID":"9b610086-19c9-4e01-8c4e-dcf6660d749e","Type":"ContainerStarted","Data":"3847469feeddfaf9bcb18c5fcaf286c5aac3ac3240a97e0da72e77a629f744a8"} Feb 28 09:17:44 crc kubenswrapper[4996]: I0228 09:17:44.007210 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.007376 4996 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.007569 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert podName:c8adb771-c22d-4f69-90a5-61cd4a36b618 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:48.00755387 +0000 UTC m=+1031.698356681 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert") pod "infra-operator-controller-manager-f7fcc58b9-x6bwt" (UID: "c8adb771-c22d-4f69-90a5-61cd4a36b618") : secret "infra-operator-webhook-server-cert" not found Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.109304 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" podUID="b1a21e4c-eb15-4914-9366-45a0bc6f2e3d" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.109547 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" podUID="36f7fdcf-d295-4ee0-9155-fbd3dc0d1234" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.109593 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" podUID="2d7f8619-4576-4fb4-83e1-73ebe232a06d" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.109950 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.103:5001/openstack-k8s-operators/test-operator:a814a5a671bcd1b4c812ef56c7666350580d6756\\\"\"" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" podUID="e4770c19-1759-4f93-88ea-696d28d6b149" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.110033 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" podUID="8a886fa9-0abd-4197-9a18-09f20f403ef4" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.110087 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" podUID="3a0ccc77-1ced-4c14-a1ac-18523be0afd4" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.110149 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:e41dfadd2c3bbcae29f8c43cd2feea6724a48cdef127d65d1d37816bb9945a01\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4" podUID="07d97cb5-6c6a-4d30-9454-8c13b5fc9adc" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.115847 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:5592ec4a6fbe2c832d1828b51af0b907e5d733d478b6f378a9b2f6d6cf0ac505\\\"\"" 
pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg" podUID="de3f6975-8417-4db2-9d04-5364f4127334" Feb 28 09:17:44 crc kubenswrapper[4996]: I0228 09:17:44.314531 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.314865 4996 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.314965 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert podName:2de35814-cd78-4178-8b32-1fbd89de94b4 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:48.314949809 +0000 UTC m=+1032.005752620 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" (UID: "2de35814-cd78-4178-8b32-1fbd89de94b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:44 crc kubenswrapper[4996]: I0228 09:17:44.619558 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:44 crc kubenswrapper[4996]: I0228 09:17:44.619684 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.619792 4996 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.619840 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:48.619825696 +0000 UTC m=+1032.310628497 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "metrics-server-cert" not found Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.620145 4996 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 09:17:44 crc kubenswrapper[4996]: E0228 09:17:44.620168 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:48.620161984 +0000 UTC m=+1032.310964795 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "webhook-server-cert" not found Feb 28 09:17:48 crc kubenswrapper[4996]: I0228 09:17:48.075875 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" Feb 28 09:17:48 crc kubenswrapper[4996]: E0228 09:17:48.076097 4996 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 09:17:48 crc kubenswrapper[4996]: E0228 09:17:48.076293 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert 
podName:c8adb771-c22d-4f69-90a5-61cd4a36b618 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:56.076276138 +0000 UTC m=+1039.767078949 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert") pod "infra-operator-controller-manager-f7fcc58b9-x6bwt" (UID: "c8adb771-c22d-4f69-90a5-61cd4a36b618") : secret "infra-operator-webhook-server-cert" not found Feb 28 09:17:48 crc kubenswrapper[4996]: I0228 09:17:48.387652 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:17:48 crc kubenswrapper[4996]: E0228 09:17:48.387907 4996 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:48 crc kubenswrapper[4996]: E0228 09:17:48.388282 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert podName:2de35814-cd78-4178-8b32-1fbd89de94b4 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:56.387999674 +0000 UTC m=+1040.078802475 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" (UID: "2de35814-cd78-4178-8b32-1fbd89de94b4") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 09:17:48 crc kubenswrapper[4996]: I0228 09:17:48.690959 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:48 crc kubenswrapper[4996]: E0228 09:17:48.691159 4996 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 09:17:48 crc kubenswrapper[4996]: E0228 09:17:48.691535 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:56.691512277 +0000 UTC m=+1040.382315098 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "metrics-server-cert" not found Feb 28 09:17:48 crc kubenswrapper[4996]: I0228 09:17:48.691439 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:48 crc kubenswrapper[4996]: E0228 09:17:48.691562 4996 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 09:17:48 crc kubenswrapper[4996]: E0228 09:17:48.691627 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:17:56.691607189 +0000 UTC m=+1040.382410090 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "webhook-server-cert" not found Feb 28 09:17:55 crc kubenswrapper[4996]: E0228 09:17:55.688968 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c" Feb 28 09:17:55 crc kubenswrapper[4996]: E0228 09:17:55.689762 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dm4kj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7c789f89c6-clblk_openstack-operators(5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 09:17:55 crc kubenswrapper[4996]: E0228 09:17:55.690986 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk" podUID="5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.136878 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert\") pod 
\"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.145648 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c8adb771-c22d-4f69-90a5-61cd4a36b618-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-x6bwt\" (UID: \"c8adb771-c22d-4f69-90a5-61cd4a36b618\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.173415 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.209265 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml" event={"ID":"996ef81c-b994-461d-a9e0-ec61f8fe65f3","Type":"ContainerStarted","Data":"a1e0b6c6301a8b10770e99a855c3ae652de4845ef8e29e016819831eb459334d"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.210318 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.211799 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w" event={"ID":"bd32451f-7a7d-429f-906f-d98e355c1abf","Type":"ContainerStarted","Data":"83c8aa2a26905d7a5ff30e36aa65c6a6f22251f9552bf31ce6a1deb5e1597a17"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.211960 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.227877 4996 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml" event={"ID":"a4e35f97-45f5-457f-bc93-86536fcbee68","Type":"ContainerStarted","Data":"644a5a706c13df8900f917b18918e2898de3f3711024497b0cf4424a1581bf36"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.228696 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.236922 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml" podStartSLOduration=2.6888966439999997 podStartE2EDuration="16.236903748s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.141217325 +0000 UTC m=+1025.832020136" lastFinishedPulling="2026-02-28 09:17:55.689224429 +0000 UTC m=+1039.380027240" observedRunningTime="2026-02-28 09:17:56.234658932 +0000 UTC m=+1039.925461753" watchObservedRunningTime="2026-02-28 09:17:56.236903748 +0000 UTC m=+1039.927706559" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.251878 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw" event={"ID":"93f353d8-bbaa-4ec1-b816-d23d58c05ee1","Type":"ContainerStarted","Data":"f23a8ecc12f35d0eb6b3c3fa777d35ce1c03124beccb411abb7346f938af0521"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.252791 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.263235 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml" podStartSLOduration=2.782775335 podStartE2EDuration="16.263220798s" 
podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.238282465 +0000 UTC m=+1025.929085276" lastFinishedPulling="2026-02-28 09:17:55.718727928 +0000 UTC m=+1039.409530739" observedRunningTime="2026-02-28 09:17:56.260560022 +0000 UTC m=+1039.951362853" watchObservedRunningTime="2026-02-28 09:17:56.263220798 +0000 UTC m=+1039.954023599" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.270841 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j" event={"ID":"325efd0b-ff17-4ea1-a1d5-c12576259ce5","Type":"ContainerStarted","Data":"b0ad6d4dcf23260dbdfbb1eeeee4535426b563b089d3c8b969754fe50d87a69a"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.271078 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.277716 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26" event={"ID":"b503546b-54b6-4133-8d44-6a162ef54232","Type":"ContainerStarted","Data":"099d549e5642cd526384710833f877d91aab5468d2bc818e7b672583350fa2fe"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.278454 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.281899 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w" podStartSLOduration=2.751908721 podStartE2EDuration="16.281883989s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.164965442 +0000 UTC m=+1025.855768253" lastFinishedPulling="2026-02-28 09:17:55.69494071 +0000 UTC m=+1039.385743521" 
observedRunningTime="2026-02-28 09:17:56.276998258 +0000 UTC m=+1039.967801149" watchObservedRunningTime="2026-02-28 09:17:56.281883989 +0000 UTC m=+1039.972686810" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.283810 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l" event={"ID":"9d5331d1-e5df-4b2c-8663-5fe6afc00995","Type":"ContainerStarted","Data":"c06f814e11ffb9c359142427ccd27954c900a3a7433bc3a38137000084d61485"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.284599 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.306393 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg" event={"ID":"9b610086-19c9-4e01-8c4e-dcf6660d749e","Type":"ContainerStarted","Data":"56b1967bbacf3973991425ca7942bbe8aa38fb4e1b7492beac243931f9aadca5"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.307169 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.307611 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw" podStartSLOduration=2.852256672 podStartE2EDuration="16.307593925s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.241269708 +0000 UTC m=+1025.932072509" lastFinishedPulling="2026-02-28 09:17:55.696606961 +0000 UTC m=+1039.387409762" observedRunningTime="2026-02-28 09:17:56.30739366 +0000 UTC m=+1039.998196471" watchObservedRunningTime="2026-02-28 09:17:56.307593925 +0000 UTC m=+1039.998396726" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.309336 
4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht" event={"ID":"84adbefa-8503-41bf-8b9b-662b08251cff","Type":"ContainerStarted","Data":"b8e4a9497fe629b94ba051ca0a3b2710fe93c4f7e1fb40f4474757cfc0134973"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.309981 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.321138 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx" event={"ID":"9369ade1-1b2d-45cf-b376-1963d785be5c","Type":"ContainerStarted","Data":"f7b048ce4db8d21dbfc44d95e83e2e95a371442679e6400c8624b1fecea09542"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.322867 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.335036 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j" podStartSLOduration=2.80526803 podStartE2EDuration="16.334977852s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.159953088 +0000 UTC m=+1025.850755899" lastFinishedPulling="2026-02-28 09:17:55.68966291 +0000 UTC m=+1039.380465721" observedRunningTime="2026-02-28 09:17:56.325053407 +0000 UTC m=+1040.015856218" watchObservedRunningTime="2026-02-28 09:17:56.334977852 +0000 UTC m=+1040.025780683" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.354621 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw" 
event={"ID":"cb2d53e3-ca80-4c1c-8d0b-02caeb753792","Type":"ContainerStarted","Data":"15d34bd383211b331a77db2d96a1b26e2e994e01cde8fe3b6ae0d81d0dd3cce3"} Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.354666 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.358126 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l" podStartSLOduration=2.825149382 podStartE2EDuration="16.358105283s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.165442354 +0000 UTC m=+1025.856245165" lastFinishedPulling="2026-02-28 09:17:55.698398255 +0000 UTC m=+1039.389201066" observedRunningTime="2026-02-28 09:17:56.354895364 +0000 UTC m=+1040.045698175" watchObservedRunningTime="2026-02-28 09:17:56.358105283 +0000 UTC m=+1040.048908094" Feb 28 09:17:56 crc kubenswrapper[4996]: E0228 09:17:56.358270 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk" podUID="5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.401602 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx" podStartSLOduration=2.521873824 podStartE2EDuration="16.401585988s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:41.820402534 +0000 UTC m=+1025.511205345" lastFinishedPulling="2026-02-28 09:17:55.700114698 +0000 UTC m=+1039.390917509" 
observedRunningTime="2026-02-28 09:17:56.40126708 +0000 UTC m=+1040.092069901" watchObservedRunningTime="2026-02-28 09:17:56.401585988 +0000 UTC m=+1040.092388799" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.436275 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg" podStartSLOduration=2.931182443 podStartE2EDuration="16.436249825s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.190862932 +0000 UTC m=+1025.881665743" lastFinishedPulling="2026-02-28 09:17:55.695930314 +0000 UTC m=+1039.386733125" observedRunningTime="2026-02-28 09:17:56.427035178 +0000 UTC m=+1040.117837989" watchObservedRunningTime="2026-02-28 09:17:56.436249825 +0000 UTC m=+1040.127052636" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.452695 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.472718 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2de35814-cd78-4178-8b32-1fbd89de94b4-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4\" (UID: \"2de35814-cd78-4178-8b32-1fbd89de94b4\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.489338 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht" podStartSLOduration=3.027257788 podStartE2EDuration="16.489321177s" 
podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.238196383 +0000 UTC m=+1025.928999194" lastFinishedPulling="2026-02-28 09:17:55.700259772 +0000 UTC m=+1039.391062583" observedRunningTime="2026-02-28 09:17:56.481079303 +0000 UTC m=+1040.171882114" watchObservedRunningTime="2026-02-28 09:17:56.489321177 +0000 UTC m=+1040.180123988" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.543454 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26" podStartSLOduration=3.169495693 podStartE2EDuration="16.543436855s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.325210733 +0000 UTC m=+1026.016013544" lastFinishedPulling="2026-02-28 09:17:55.699151895 +0000 UTC m=+1039.389954706" observedRunningTime="2026-02-28 09:17:56.512344577 +0000 UTC m=+1040.203147388" watchObservedRunningTime="2026-02-28 09:17:56.543436855 +0000 UTC m=+1040.234239666" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.608086 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw" podStartSLOduration=3.052984674 podStartE2EDuration="16.608059502s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.142190309 +0000 UTC m=+1025.832993120" lastFinishedPulling="2026-02-28 09:17:55.697265137 +0000 UTC m=+1039.388067948" observedRunningTime="2026-02-28 09:17:56.598361113 +0000 UTC m=+1040.289163924" watchObservedRunningTime="2026-02-28 09:17:56.608059502 +0000 UTC m=+1040.298862323" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.717534 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt"] Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.757665 4996 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.779650 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:56 crc kubenswrapper[4996]: I0228 09:17:56.779705 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:17:56 crc kubenswrapper[4996]: E0228 09:17:56.779854 4996 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 09:17:56 crc kubenswrapper[4996]: E0228 09:17:56.779900 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:18:12.77988676 +0000 UTC m=+1056.470689571 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "webhook-server-cert" not found Feb 28 09:17:56 crc kubenswrapper[4996]: E0228 09:17:56.780214 4996 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 09:17:56 crc kubenswrapper[4996]: E0228 09:17:56.780244 4996 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs podName:143a07a9-b2e4-4b4b-9328-a3feee140c26 nodeName:}" failed. No retries permitted until 2026-02-28 09:18:12.780237439 +0000 UTC m=+1056.471040240 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs") pod "openstack-operator-controller-manager-65cbf4f977-dh2cm" (UID: "143a07a9-b2e4-4b4b-9328-a3feee140c26") : secret "metrics-server-cert" not found Feb 28 09:17:57 crc kubenswrapper[4996]: I0228 09:17:57.238060 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4"] Feb 28 09:17:57 crc kubenswrapper[4996]: W0228 09:17:57.252801 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2de35814_cd78_4178_8b32_1fbd89de94b4.slice/crio-0b0ce3fad75d181bfb9c98bcacac461e91103d278db1045adc3c428d17cccced WatchSource:0}: Error finding container 0b0ce3fad75d181bfb9c98bcacac461e91103d278db1045adc3c428d17cccced: Status 404 returned error can't find the container with id 0b0ce3fad75d181bfb9c98bcacac461e91103d278db1045adc3c428d17cccced Feb 28 09:17:57 crc kubenswrapper[4996]: I0228 09:17:57.363844 4996 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" event={"ID":"2de35814-cd78-4178-8b32-1fbd89de94b4","Type":"ContainerStarted","Data":"0b0ce3fad75d181bfb9c98bcacac461e91103d278db1045adc3c428d17cccced"} Feb 28 09:17:57 crc kubenswrapper[4996]: I0228 09:17:57.366313 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" event={"ID":"c8adb771-c22d-4f69-90a5-61cd4a36b618","Type":"ContainerStarted","Data":"5471253da66c7fbd9bfb12c18e16a9466b2c258505c2173c587e9adeb4f45ead"} Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.131388 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537838-n88jq"] Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.133425 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537838-n88jq" Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.142489 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.142610 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.142709 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.152964 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537838-n88jq"] Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.253074 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs6dr\" (UniqueName: \"kubernetes.io/projected/23adee06-6959-459e-9756-94f5f491682c-kube-api-access-vs6dr\") pod 
\"auto-csr-approver-29537838-n88jq\" (UID: \"23adee06-6959-459e-9756-94f5f491682c\") " pod="openshift-infra/auto-csr-approver-29537838-n88jq" Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.354914 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs6dr\" (UniqueName: \"kubernetes.io/projected/23adee06-6959-459e-9756-94f5f491682c-kube-api-access-vs6dr\") pod \"auto-csr-approver-29537838-n88jq\" (UID: \"23adee06-6959-459e-9756-94f5f491682c\") " pod="openshift-infra/auto-csr-approver-29537838-n88jq" Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.379366 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs6dr\" (UniqueName: \"kubernetes.io/projected/23adee06-6959-459e-9756-94f5f491682c-kube-api-access-vs6dr\") pod \"auto-csr-approver-29537838-n88jq\" (UID: \"23adee06-6959-459e-9756-94f5f491682c\") " pod="openshift-infra/auto-csr-approver-29537838-n88jq" Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.463068 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537838-n88jq" Feb 28 09:18:00 crc kubenswrapper[4996]: I0228 09:18:00.471198 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-bk2ml" Feb 28 09:18:07 crc kubenswrapper[4996]: I0228 09:18:07.469376 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" Feb 28 09:18:07 crc kubenswrapper[4996]: I0228 09:18:07.485831 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" podStartSLOduration=21.063688725 podStartE2EDuration="27.4858105s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:56.731115204 +0000 UTC m=+1040.421918015" lastFinishedPulling="2026-02-28 09:18:03.153236979 +0000 UTC m=+1046.844039790" observedRunningTime="2026-02-28 09:18:07.484682282 +0000 UTC m=+1051.175485103" watchObservedRunningTime="2026-02-28 09:18:07.4858105 +0000 UTC m=+1051.176613311" Feb 28 09:18:07 crc kubenswrapper[4996]: I0228 09:18:07.605244 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537838-n88jq"] Feb 28 09:18:07 crc kubenswrapper[4996]: W0228 09:18:07.608883 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23adee06_6959_459e_9756_94f5f491682c.slice/crio-ed4b5cdd17852bc72af52602daa974a0ca122099e2ad13691a14ad6c9c4bf2e6 WatchSource:0}: Error finding container ed4b5cdd17852bc72af52602daa974a0ca122099e2ad13691a14ad6c9c4bf2e6: Status 404 returned error can't find the container with id ed4b5cdd17852bc72af52602daa974a0ca122099e2ad13691a14ad6c9c4bf2e6 Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.475896 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29537838-n88jq" event={"ID":"23adee06-6959-459e-9756-94f5f491682c","Type":"ContainerStarted","Data":"ed4b5cdd17852bc72af52602daa974a0ca122099e2ad13691a14ad6c9c4bf2e6"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.477094 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" event={"ID":"b1a21e4c-eb15-4914-9366-45a0bc6f2e3d","Type":"ContainerStarted","Data":"a388d40adbe2b64beac023b78311ae7244bcb69499c571adea473c78a4219d87"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.477317 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.479993 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" event={"ID":"8a886fa9-0abd-4197-9a18-09f20f403ef4","Type":"ContainerStarted","Data":"83d24de468009cb00609da8bcd0cb6d8da248c6ad384f01d58e6196c7e9bfa20"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.480246 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.482424 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" event={"ID":"2de35814-cd78-4178-8b32-1fbd89de94b4","Type":"ContainerStarted","Data":"e8111a86e316532908f6804b0bcd90e855bbe272e67f80ce359981c417642a8d"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.482476 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.483636 4996 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" event={"ID":"e4770c19-1759-4f93-88ea-696d28d6b149","Type":"ContainerStarted","Data":"6484cddefa77ffa33dd558cf1f9159da489db08cf8293e38a8da6ec5f9329f13"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.483790 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.486155 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg" event={"ID":"de3f6975-8417-4db2-9d04-5364f4127334","Type":"ContainerStarted","Data":"e9d0b5cdc204c75470ecaf1e28a479ecfb35cac5d618897ec0af4dd0c57c56df"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.486345 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.492641 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" podStartSLOduration=3.607765258 podStartE2EDuration="28.492626729s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.275117675 +0000 UTC m=+1025.965920486" lastFinishedPulling="2026-02-28 09:18:07.159979146 +0000 UTC m=+1050.850781957" observedRunningTime="2026-02-28 09:18:08.488049416 +0000 UTC m=+1052.178852227" watchObservedRunningTime="2026-02-28 09:18:08.492626729 +0000 UTC m=+1052.183429540" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.495514 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" 
event={"ID":"c8adb771-c22d-4f69-90a5-61cd4a36b618","Type":"ContainerStarted","Data":"2bfc61bbc5c0de15c21f5613f3988c3531e9cf51f86cdf9f76a7594f30476c91"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.517047 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" event={"ID":"3a0ccc77-1ced-4c14-a1ac-18523be0afd4","Type":"ContainerStarted","Data":"188a9695acb0915a634ff2a10f4a3bd93f4cf8f1b2bd9a396a2ed61a92a165c1"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.522212 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.546566 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" podStartSLOduration=18.663799501 podStartE2EDuration="28.546546232s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:57.263463134 +0000 UTC m=+1040.954265945" lastFinishedPulling="2026-02-28 09:18:07.146209855 +0000 UTC m=+1050.837012676" observedRunningTime="2026-02-28 09:18:08.513015493 +0000 UTC m=+1052.203818304" watchObservedRunningTime="2026-02-28 09:18:08.546546232 +0000 UTC m=+1052.237349043" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.561322 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" event={"ID":"36f7fdcf-d295-4ee0-9155-fbd3dc0d1234","Type":"ContainerStarted","Data":"8d9c50fc8f0a9b604a3ca7c59e02fb86bf97c52bb52d9f55d73bec08777698ea"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.563855 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4" 
event={"ID":"07d97cb5-6c6a-4d30-9454-8c13b5fc9adc","Type":"ContainerStarted","Data":"3b7e419ef238fa87d60a688d670dd0dbc52858ad78fc68cd2ec77c9fb19f932a"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.564333 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.564868 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" event={"ID":"2d7f8619-4576-4fb4-83e1-73ebe232a06d","Type":"ContainerStarted","Data":"a2b787bd4de965fb8f4f9e13b53269a6ab398ec2e66ddb6a7c7475a2d649617c"} Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.565194 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.566894 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg" podStartSLOduration=7.692033751 podStartE2EDuration="28.566882764s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.278391886 +0000 UTC m=+1025.969194697" lastFinishedPulling="2026-02-28 09:18:03.153240899 +0000 UTC m=+1046.844043710" observedRunningTime="2026-02-28 09:18:08.558181899 +0000 UTC m=+1052.248984710" watchObservedRunningTime="2026-02-28 09:18:08.566882764 +0000 UTC m=+1052.257685575" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.572929 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" podStartSLOduration=3.68596668 podStartE2EDuration="28.572914913s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.271244559 +0000 UTC m=+1025.962047370" 
lastFinishedPulling="2026-02-28 09:18:07.158192792 +0000 UTC m=+1050.848995603" observedRunningTime="2026-02-28 09:18:08.57280406 +0000 UTC m=+1052.263606871" watchObservedRunningTime="2026-02-28 09:18:08.572914913 +0000 UTC m=+1052.263717734" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.592471 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" podStartSLOduration=3.7134904410000003 podStartE2EDuration="28.592450166s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.279463852 +0000 UTC m=+1025.970266653" lastFinishedPulling="2026-02-28 09:18:07.158423567 +0000 UTC m=+1050.849226378" observedRunningTime="2026-02-28 09:18:08.58733685 +0000 UTC m=+1052.278139661" watchObservedRunningTime="2026-02-28 09:18:08.592450166 +0000 UTC m=+1052.283252977" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.607632 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" podStartSLOduration=3.752195249 podStartE2EDuration="28.607598291s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.241709709 +0000 UTC m=+1025.932512510" lastFinishedPulling="2026-02-28 09:18:07.097112731 +0000 UTC m=+1050.787915552" observedRunningTime="2026-02-28 09:18:08.605759806 +0000 UTC m=+1052.296562617" watchObservedRunningTime="2026-02-28 09:18:08.607598291 +0000 UTC m=+1052.298401092" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.630579 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4" podStartSLOduration=3.714914417 podStartE2EDuration="28.630562919s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.241545135 +0000 UTC m=+1025.932347946" 
lastFinishedPulling="2026-02-28 09:18:07.157193637 +0000 UTC m=+1050.847996448" observedRunningTime="2026-02-28 09:18:08.627798111 +0000 UTC m=+1052.318600922" watchObservedRunningTime="2026-02-28 09:18:08.630562919 +0000 UTC m=+1052.321365720" Feb 28 09:18:08 crc kubenswrapper[4996]: I0228 09:18:08.654983 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" podStartSLOduration=4.19120019 podStartE2EDuration="28.654966262s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.278403476 +0000 UTC m=+1025.969206287" lastFinishedPulling="2026-02-28 09:18:06.742169548 +0000 UTC m=+1050.432972359" observedRunningTime="2026-02-28 09:18:08.64722913 +0000 UTC m=+1052.338031941" watchObservedRunningTime="2026-02-28 09:18:08.654966262 +0000 UTC m=+1052.345769073" Feb 28 09:18:09 crc kubenswrapper[4996]: I0228 09:18:09.063132 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-tn67k" podStartSLOduration=4.160382329 podStartE2EDuration="29.063112591s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.274952441 +0000 UTC m=+1025.965755242" lastFinishedPulling="2026-02-28 09:18:07.177682693 +0000 UTC m=+1050.868485504" observedRunningTime="2026-02-28 09:18:08.671292486 +0000 UTC m=+1052.362095297" watchObservedRunningTime="2026-02-28 09:18:09.063112591 +0000 UTC m=+1052.753915402" Feb 28 09:18:09 crc kubenswrapper[4996]: I0228 09:18:09.572196 4996 generic.go:334] "Generic (PLEG): container finished" podID="23adee06-6959-459e-9756-94f5f491682c" containerID="054e942c70d30c1f905549216637a4e5fdcaeff9563c073c28b07d53aa2fbad7" exitCode=0 Feb 28 09:18:09 crc kubenswrapper[4996]: I0228 09:18:09.572263 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537838-n88jq" 
event={"ID":"23adee06-6959-459e-9756-94f5f491682c","Type":"ContainerDied","Data":"054e942c70d30c1f905549216637a4e5fdcaeff9563c073c28b07d53aa2fbad7"} Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.418616 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-pppmx" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.459589 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-qs4sw" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.499609 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-4zmmg" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.515441 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-2bv7j" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.534828 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-7cdhw" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.583477 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk" event={"ID":"5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3","Type":"ContainerStarted","Data":"f54cb58185b016d660fc7bc030ca062ab8b142c665ad6aa7b2ab51c323e21336"} Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.654058 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-pp79l" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.714913 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-54688575f-kx6ht" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.731627 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-d5qml" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.814788 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-tdl5w" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.960935 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-c9v26" Feb 28 09:18:10 crc kubenswrapper[4996]: I0228 09:18:10.970269 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537838-n88jq" Feb 28 09:18:11 crc kubenswrapper[4996]: I0228 09:18:11.149951 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs6dr\" (UniqueName: \"kubernetes.io/projected/23adee06-6959-459e-9756-94f5f491682c-kube-api-access-vs6dr\") pod \"23adee06-6959-459e-9756-94f5f491682c\" (UID: \"23adee06-6959-459e-9756-94f5f491682c\") " Feb 28 09:18:11 crc kubenswrapper[4996]: I0228 09:18:11.157764 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23adee06-6959-459e-9756-94f5f491682c-kube-api-access-vs6dr" (OuterVolumeSpecName: "kube-api-access-vs6dr") pod "23adee06-6959-459e-9756-94f5f491682c" (UID: "23adee06-6959-459e-9756-94f5f491682c"). InnerVolumeSpecName "kube-api-access-vs6dr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:18:11 crc kubenswrapper[4996]: I0228 09:18:11.251953 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs6dr\" (UniqueName: \"kubernetes.io/projected/23adee06-6959-459e-9756-94f5f491682c-kube-api-access-vs6dr\") on node \"crc\" DevicePath \"\"" Feb 28 09:18:11 crc kubenswrapper[4996]: I0228 09:18:11.591031 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537838-n88jq" Feb 28 09:18:11 crc kubenswrapper[4996]: I0228 09:18:11.591033 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537838-n88jq" event={"ID":"23adee06-6959-459e-9756-94f5f491682c","Type":"ContainerDied","Data":"ed4b5cdd17852bc72af52602daa974a0ca122099e2ad13691a14ad6c9c4bf2e6"} Feb 28 09:18:11 crc kubenswrapper[4996]: I0228 09:18:11.591186 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed4b5cdd17852bc72af52602daa974a0ca122099e2ad13691a14ad6c9c4bf2e6" Feb 28 09:18:11 crc kubenswrapper[4996]: I0228 09:18:11.591263 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk" Feb 28 09:18:11 crc kubenswrapper[4996]: I0228 09:18:11.614190 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk" podStartSLOduration=4.24540875 podStartE2EDuration="31.614169452s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="2026-02-28 09:17:42.116190207 +0000 UTC m=+1025.806993058" lastFinishedPulling="2026-02-28 09:18:09.484950949 +0000 UTC m=+1053.175753760" observedRunningTime="2026-02-28 09:18:11.613604449 +0000 UTC m=+1055.304407260" watchObservedRunningTime="2026-02-28 09:18:11.614169452 +0000 UTC m=+1055.304972293" Feb 28 09:18:11 crc kubenswrapper[4996]: E0228 
09:18:11.753123 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23adee06_6959_459e_9756_94f5f491682c.slice\": RecentStats: unable to find data in memory cache]" Feb 28 09:18:12 crc kubenswrapper[4996]: I0228 09:18:12.047118 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537832-gmxhr"] Feb 28 09:18:12 crc kubenswrapper[4996]: I0228 09:18:12.060577 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537832-gmxhr"] Feb 28 09:18:12 crc kubenswrapper[4996]: I0228 09:18:12.872575 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:18:12 crc kubenswrapper[4996]: I0228 09:18:12.872625 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:18:12 crc kubenswrapper[4996]: I0228 09:18:12.877766 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-metrics-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:18:12 crc 
kubenswrapper[4996]: I0228 09:18:12.878138 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/143a07a9-b2e4-4b4b-9328-a3feee140c26-webhook-certs\") pod \"openstack-operator-controller-manager-65cbf4f977-dh2cm\" (UID: \"143a07a9-b2e4-4b4b-9328-a3feee140c26\") " pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:18:12 crc kubenswrapper[4996]: I0228 09:18:12.974241 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:18:13 crc kubenswrapper[4996]: I0228 09:18:13.047090 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5deff085-2423-4d2c-aac3-9e3ce4247b77" path="/var/lib/kubelet/pods/5deff085-2423-4d2c-aac3-9e3ce4247b77/volumes" Feb 28 09:18:13 crc kubenswrapper[4996]: I0228 09:18:13.461602 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm"] Feb 28 09:18:13 crc kubenswrapper[4996]: I0228 09:18:13.604306 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" event={"ID":"143a07a9-b2e4-4b4b-9328-a3feee140c26","Type":"ContainerStarted","Data":"b338c52894449fa06970756ded45b19f633fd9a0356af3decd546663f8762c11"} Feb 28 09:18:16 crc kubenswrapper[4996]: I0228 09:18:16.180405 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-x6bwt" Feb 28 09:18:16 crc kubenswrapper[4996]: I0228 09:18:16.768669 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4" Feb 28 09:18:19 crc kubenswrapper[4996]: I0228 09:18:19.660144 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" event={"ID":"143a07a9-b2e4-4b4b-9328-a3feee140c26","Type":"ContainerStarted","Data":"5d5a70cbd304eadc720220cdb53da1d443de233e6837ff73077e613c6cc14e56"} Feb 28 09:18:19 crc kubenswrapper[4996]: I0228 09:18:19.661517 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:18:19 crc kubenswrapper[4996]: I0228 09:18:19.698441 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" podStartSLOduration=39.698420333 podStartE2EDuration="39.698420333s" podCreationTimestamp="2026-02-28 09:17:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:18:19.693213305 +0000 UTC m=+1063.384016116" watchObservedRunningTime="2026-02-28 09:18:19.698420333 +0000 UTC m=+1063.389223164" Feb 28 09:18:20 crc kubenswrapper[4996]: I0228 09:18:20.604456 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-clblk" Feb 28 09:18:20 crc kubenswrapper[4996]: I0228 09:18:20.623207 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-6kcm4" Feb 28 09:18:20 crc kubenswrapper[4996]: I0228 09:18:20.699638 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-zmcjg" Feb 28 09:18:20 crc kubenswrapper[4996]: I0228 09:18:20.845495 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-wwzjf" Feb 28 09:18:20 crc kubenswrapper[4996]: I0228 09:18:20.931220 4996 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-q59cv" Feb 28 09:18:21 crc kubenswrapper[4996]: I0228 09:18:21.011450 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-f88sn" Feb 28 09:18:21 crc kubenswrapper[4996]: I0228 09:18:21.050874 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-655d95ddc7-xxt4d" Feb 28 09:18:21 crc kubenswrapper[4996]: I0228 09:18:21.058612 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-qzzwv" Feb 28 09:18:32 crc kubenswrapper[4996]: I0228 09:18:32.985743 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65cbf4f977-dh2cm" Feb 28 09:18:39 crc kubenswrapper[4996]: I0228 09:18:39.480176 4996 scope.go:117] "RemoveContainer" containerID="2a704e235144c7136a09dc9ab820ed9f441017f15606456eef364755dedc81b0" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.120268 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvjkb"] Feb 28 09:18:56 crc kubenswrapper[4996]: E0228 09:18:56.122477 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23adee06-6959-459e-9756-94f5f491682c" containerName="oc" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.122501 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="23adee06-6959-459e-9756-94f5f491682c" containerName="oc" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.122679 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="23adee06-6959-459e-9756-94f5f491682c" containerName="oc" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.123598 4996 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.131182 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.131691 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.132103 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvjkb"] Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.133468 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-j9x7h" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.133644 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.171929 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f76m"] Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.173518 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.176167 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.197225 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f76m"] Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.200307 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54aa8c8-3e81-449f-ac67-6de88e86643f-config\") pod \"dnsmasq-dns-675f4bcbfc-dvjkb\" (UID: \"d54aa8c8-3e81-449f-ac67-6de88e86643f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.200548 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/d54aa8c8-3e81-449f-ac67-6de88e86643f-kube-api-access-8n8b2\") pod \"dnsmasq-dns-675f4bcbfc-dvjkb\" (UID: \"d54aa8c8-3e81-449f-ac67-6de88e86643f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.301766 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2f76m\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.301893 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbww7\" (UniqueName: \"kubernetes.io/projected/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-kube-api-access-fbww7\") pod \"dnsmasq-dns-78dd6ddcc-2f76m\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.301943 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-config\") pod \"dnsmasq-dns-78dd6ddcc-2f76m\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.301967 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/d54aa8c8-3e81-449f-ac67-6de88e86643f-kube-api-access-8n8b2\") pod \"dnsmasq-dns-675f4bcbfc-dvjkb\" (UID: \"d54aa8c8-3e81-449f-ac67-6de88e86643f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.302057 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54aa8c8-3e81-449f-ac67-6de88e86643f-config\") pod \"dnsmasq-dns-675f4bcbfc-dvjkb\" (UID: \"d54aa8c8-3e81-449f-ac67-6de88e86643f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.302969 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54aa8c8-3e81-449f-ac67-6de88e86643f-config\") pod \"dnsmasq-dns-675f4bcbfc-dvjkb\" (UID: \"d54aa8c8-3e81-449f-ac67-6de88e86643f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.321902 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/d54aa8c8-3e81-449f-ac67-6de88e86643f-kube-api-access-8n8b2\") pod \"dnsmasq-dns-675f4bcbfc-dvjkb\" (UID: \"d54aa8c8-3e81-449f-ac67-6de88e86643f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:18:56 crc 
kubenswrapper[4996]: I0228 09:18:56.403142 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbww7\" (UniqueName: \"kubernetes.io/projected/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-kube-api-access-fbww7\") pod \"dnsmasq-dns-78dd6ddcc-2f76m\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.403191 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-config\") pod \"dnsmasq-dns-78dd6ddcc-2f76m\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.403280 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2f76m\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.404040 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2f76m\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.404868 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-config\") pod \"dnsmasq-dns-78dd6ddcc-2f76m\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.428893 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fbww7\" (UniqueName: \"kubernetes.io/projected/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-kube-api-access-fbww7\") pod \"dnsmasq-dns-78dd6ddcc-2f76m\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.443685 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.500102 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.724021 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f76m"] Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.867459 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvjkb"] Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.962397 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" event={"ID":"d54aa8c8-3e81-449f-ac67-6de88e86643f","Type":"ContainerStarted","Data":"3fd531c24d0c06abc391d2698f930530c30186163cb7687361f6e95561f14180"} Feb 28 09:18:56 crc kubenswrapper[4996]: I0228 09:18:56.963798 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" event={"ID":"1db8f17a-fafb-4b52-9e56-dea0e2baa90f","Type":"ContainerStarted","Data":"777d3d682b864a29e302687c4665dbeff3239878dc55e5651d24b64c5abf3b2b"} Feb 28 09:18:58 crc kubenswrapper[4996]: I0228 09:18:58.796223 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvjkb"] Feb 28 09:18:58 crc kubenswrapper[4996]: I0228 09:18:58.824989 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qrfcq"] Feb 28 09:18:58 crc kubenswrapper[4996]: I0228 09:18:58.826157 4996 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:58 crc kubenswrapper[4996]: I0228 09:18:58.831252 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qrfcq"] Feb 28 09:18:58 crc kubenswrapper[4996]: I0228 09:18:58.940807 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qrfcq\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:58 crc kubenswrapper[4996]: I0228 09:18:58.940868 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-config\") pod \"dnsmasq-dns-5ccc8479f9-qrfcq\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:58 crc kubenswrapper[4996]: I0228 09:18:58.940927 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzz7v\" (UniqueName: \"kubernetes.io/projected/dd7293e9-0617-4241-b0fb-fe3ac621adbf-kube-api-access-bzz7v\") pod \"dnsmasq-dns-5ccc8479f9-qrfcq\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.041991 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-config\") pod \"dnsmasq-dns-5ccc8479f9-qrfcq\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.042928 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bzz7v\" (UniqueName: \"kubernetes.io/projected/dd7293e9-0617-4241-b0fb-fe3ac621adbf-kube-api-access-bzz7v\") pod \"dnsmasq-dns-5ccc8479f9-qrfcq\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.043273 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qrfcq\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.042929 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-config\") pod \"dnsmasq-dns-5ccc8479f9-qrfcq\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.048738 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-qrfcq\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.068799 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzz7v\" (UniqueName: \"kubernetes.io/projected/dd7293e9-0617-4241-b0fb-fe3ac621adbf-kube-api-access-bzz7v\") pod \"dnsmasq-dns-5ccc8479f9-qrfcq\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.122076 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f76m"] Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.140171 4996 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs5zk"] Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.155430 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.167941 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.173569 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs5zk"] Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.269749 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-config\") pod \"dnsmasq-dns-57d769cc4f-gs5zk\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.270052 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrz8h\" (UniqueName: \"kubernetes.io/projected/07ce6626-c519-4ba1-91c4-46878b6eeaa2-kube-api-access-rrz8h\") pod \"dnsmasq-dns-57d769cc4f-gs5zk\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.270091 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gs5zk\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.372827 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gs5zk\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.372926 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-config\") pod \"dnsmasq-dns-57d769cc4f-gs5zk\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.373025 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrz8h\" (UniqueName: \"kubernetes.io/projected/07ce6626-c519-4ba1-91c4-46878b6eeaa2-kube-api-access-rrz8h\") pod \"dnsmasq-dns-57d769cc4f-gs5zk\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.374188 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-config\") pod \"dnsmasq-dns-57d769cc4f-gs5zk\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.374261 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-gs5zk\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.399584 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrz8h\" (UniqueName: \"kubernetes.io/projected/07ce6626-c519-4ba1-91c4-46878b6eeaa2-kube-api-access-rrz8h\") pod 
\"dnsmasq-dns-57d769cc4f-gs5zk\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.475628 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:18:59 crc kubenswrapper[4996]: I0228 09:18:59.692023 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qrfcq"] Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.022674 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.024252 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.029443 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.032258 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.032434 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-27dtj" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.032552 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.032970 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.033398 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.034211 4996 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.036922 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185182 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185227 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d394b420-eb09-49f3-a92c-32cbed3f63eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185250 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185267 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185288 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/d394b420-eb09-49f3-a92c-32cbed3f63eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185311 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhf65\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-kube-api-access-qhf65\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185342 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185412 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185445 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185476 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.185500 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.253412 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.254646 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.259787 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.259811 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.259866 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.260121 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hvgx4" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.259983 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.259982 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 
09:19:00.260458 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.273186 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.292848 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.292940 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.292970 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d394b420-eb09-49f3-a92c-32cbed3f63eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.292995 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.293047 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.293074 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d394b420-eb09-49f3-a92c-32cbed3f63eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.293097 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhf65\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-kube-api-access-qhf65\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.293118 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.293141 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.293182 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.293220 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.293769 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.299650 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.300741 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.301121 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.301361 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.302887 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.303794 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.305611 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d394b420-eb09-49f3-a92c-32cbed3f63eb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.318182 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d394b420-eb09-49f3-a92c-32cbed3f63eb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.319939 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.332781 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhf65\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-kube-api-access-qhf65\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.350700 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.356584 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394164 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394227 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394248 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394268 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394285 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7dfcffc8-039f-459c-9f97-d8d595506234-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc 
kubenswrapper[4996]: I0228 09:19:00.394425 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7dfcffc8-039f-459c-9f97-d8d595506234-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394474 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srl78\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-kube-api-access-srl78\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394503 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394571 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394616 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-config-data\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.394653 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.495980 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496056 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496088 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496107 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496126 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/7dfcffc8-039f-459c-9f97-d8d595506234-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496162 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7dfcffc8-039f-459c-9f97-d8d595506234-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496185 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srl78\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-kube-api-access-srl78\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496208 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496248 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496271 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.496295 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.497062 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.497363 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.498282 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.498369 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.500520 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.501144 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-config-data\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.502711 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.503188 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.505057 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7dfcffc8-039f-459c-9f97-d8d595506234-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.507489 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7dfcffc8-039f-459c-9f97-d8d595506234-erlang-cookie-secret\") pod \"rabbitmq-server-0\" 
(UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.517826 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srl78\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-kube-api-access-srl78\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.519936 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " pod="openstack/rabbitmq-server-0" Feb 28 09:19:00 crc kubenswrapper[4996]: I0228 09:19:00.584557 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.398663 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.399988 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.410732 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.411114 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.411941 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-tm9cx" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.412053 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.418403 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.428144 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.513889 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.513931 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce18b01-6974-43c9-86e2-564a4024564b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.513950 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-62pqw\" (UniqueName: \"kubernetes.io/projected/cce18b01-6974-43c9-86e2-564a4024564b-kube-api-access-62pqw\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.513981 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cce18b01-6974-43c9-86e2-564a4024564b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.514000 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cce18b01-6974-43c9-86e2-564a4024564b-kolla-config\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.514031 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cce18b01-6974-43c9-86e2-564a4024564b-config-data-default\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.514157 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce18b01-6974-43c9-86e2-564a4024564b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.514407 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cce18b01-6974-43c9-86e2-564a4024564b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.616357 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.616409 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce18b01-6974-43c9-86e2-564a4024564b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.616439 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62pqw\" (UniqueName: \"kubernetes.io/projected/cce18b01-6974-43c9-86e2-564a4024564b-kube-api-access-62pqw\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.616493 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cce18b01-6974-43c9-86e2-564a4024564b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.616518 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cce18b01-6974-43c9-86e2-564a4024564b-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.616549 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cce18b01-6974-43c9-86e2-564a4024564b-config-data-default\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.616581 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce18b01-6974-43c9-86e2-564a4024564b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.616654 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce18b01-6974-43c9-86e2-564a4024564b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.616816 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.617725 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cce18b01-6974-43c9-86e2-564a4024564b-kolla-config\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 
09:19:01.618222 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cce18b01-6974-43c9-86e2-564a4024564b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.618871 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cce18b01-6974-43c9-86e2-564a4024564b-config-data-default\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.620187 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce18b01-6974-43c9-86e2-564a4024564b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.624319 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cce18b01-6974-43c9-86e2-564a4024564b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.635282 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cce18b01-6974-43c9-86e2-564a4024564b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.643211 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62pqw\" (UniqueName: 
\"kubernetes.io/projected/cce18b01-6974-43c9-86e2-564a4024564b-kube-api-access-62pqw\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.648550 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"cce18b01-6974-43c9-86e2-564a4024564b\") " pod="openstack/openstack-galera-0" Feb 28 09:19:01 crc kubenswrapper[4996]: I0228 09:19:01.735233 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 28 09:19:02 crc kubenswrapper[4996]: W0228 09:19:02.709562 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd7293e9_0617_4241_b0fb_fe3ac621adbf.slice/crio-bc427296d6cc76aa57f270a9f90c9a0be64205fc6b1b526fe4b4eaca446755fd WatchSource:0}: Error finding container bc427296d6cc76aa57f270a9f90c9a0be64205fc6b1b526fe4b4eaca446755fd: Status 404 returned error can't find the container with id bc427296d6cc76aa57f270a9f90c9a0be64205fc6b1b526fe4b4eaca446755fd Feb 28 09:19:02 crc kubenswrapper[4996]: I0228 09:19:02.892210 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 09:19:02 crc kubenswrapper[4996]: I0228 09:19:02.893610 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:02 crc kubenswrapper[4996]: I0228 09:19:02.896270 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 28 09:19:02 crc kubenswrapper[4996]: I0228 09:19:02.896704 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ths98" Feb 28 09:19:02 crc kubenswrapper[4996]: I0228 09:19:02.899167 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 28 09:19:02 crc kubenswrapper[4996]: I0228 09:19:02.899395 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 28 09:19:02 crc kubenswrapper[4996]: I0228 09:19:02.905464 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.016556 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" event={"ID":"dd7293e9-0617-4241-b0fb-fe3ac621adbf","Type":"ContainerStarted","Data":"bc427296d6cc76aa57f270a9f90c9a0be64205fc6b1b526fe4b4eaca446755fd"} Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.045812 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwjnt\" (UniqueName: \"kubernetes.io/projected/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-kube-api-access-vwjnt\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.045850 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.045879 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.045896 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.045915 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.045947 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.045969 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.045993 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.147207 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwjnt\" (UniqueName: \"kubernetes.io/projected/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-kube-api-access-vwjnt\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.147260 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.147302 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.147321 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " 
pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.147346 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.147394 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.147436 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.147460 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.147892 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: 
I0228 09:19:03.148141 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.148606 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.148760 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.149410 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.153143 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.153388 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.165139 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.175631 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwjnt\" (UniqueName: \"kubernetes.io/projected/25c48fac-9425-4af6-aa7d-6b2c2428ef2d-kube-api-access-vwjnt\") pod \"openstack-cell1-galera-0\" (UID: \"25c48fac-9425-4af6-aa7d-6b2c2428ef2d\") " pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.226670 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.270868 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.271762 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.274049 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.274266 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-lkdhk" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.274411 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.283393 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.350588 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f427004-3205-42d2-86db-84131a0d2ab7-config-data\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.350638 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3f427004-3205-42d2-86db-84131a0d2ab7-kolla-config\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.350661 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f427004-3205-42d2-86db-84131a0d2ab7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.350696 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f427004-3205-42d2-86db-84131a0d2ab7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.350885 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-688fv\" (UniqueName: \"kubernetes.io/projected/3f427004-3205-42d2-86db-84131a0d2ab7-kube-api-access-688fv\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.452255 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-688fv\" (UniqueName: \"kubernetes.io/projected/3f427004-3205-42d2-86db-84131a0d2ab7-kube-api-access-688fv\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.452342 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f427004-3205-42d2-86db-84131a0d2ab7-config-data\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.452395 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3f427004-3205-42d2-86db-84131a0d2ab7-kolla-config\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.452423 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f427004-3205-42d2-86db-84131a0d2ab7-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.452477 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f427004-3205-42d2-86db-84131a0d2ab7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.453177 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3f427004-3205-42d2-86db-84131a0d2ab7-config-data\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.453634 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3f427004-3205-42d2-86db-84131a0d2ab7-kolla-config\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.455504 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f427004-3205-42d2-86db-84131a0d2ab7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.456567 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f427004-3205-42d2-86db-84131a0d2ab7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.469582 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-688fv\" (UniqueName: 
\"kubernetes.io/projected/3f427004-3205-42d2-86db-84131a0d2ab7-kube-api-access-688fv\") pod \"memcached-0\" (UID: \"3f427004-3205-42d2-86db-84131a0d2ab7\") " pod="openstack/memcached-0" Feb 28 09:19:03 crc kubenswrapper[4996]: I0228 09:19:03.599784 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 28 09:19:05 crc kubenswrapper[4996]: I0228 09:19:05.435492 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 09:19:05 crc kubenswrapper[4996]: I0228 09:19:05.437225 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 09:19:05 crc kubenswrapper[4996]: I0228 09:19:05.441592 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-c4kbc" Feb 28 09:19:05 crc kubenswrapper[4996]: I0228 09:19:05.450360 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 09:19:05 crc kubenswrapper[4996]: I0228 09:19:05.604961 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr7fs\" (UniqueName: \"kubernetes.io/projected/b0396f01-c84a-4562-a8e3-6f166d52d629-kube-api-access-kr7fs\") pod \"kube-state-metrics-0\" (UID: \"b0396f01-c84a-4562-a8e3-6f166d52d629\") " pod="openstack/kube-state-metrics-0" Feb 28 09:19:05 crc kubenswrapper[4996]: I0228 09:19:05.706168 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr7fs\" (UniqueName: \"kubernetes.io/projected/b0396f01-c84a-4562-a8e3-6f166d52d629-kube-api-access-kr7fs\") pod \"kube-state-metrics-0\" (UID: \"b0396f01-c84a-4562-a8e3-6f166d52d629\") " pod="openstack/kube-state-metrics-0" Feb 28 09:19:05 crc kubenswrapper[4996]: I0228 09:19:05.725086 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr7fs\" (UniqueName: 
\"kubernetes.io/projected/b0396f01-c84a-4562-a8e3-6f166d52d629-kube-api-access-kr7fs\") pod \"kube-state-metrics-0\" (UID: \"b0396f01-c84a-4562-a8e3-6f166d52d629\") " pod="openstack/kube-state-metrics-0" Feb 28 09:19:05 crc kubenswrapper[4996]: I0228 09:19:05.772883 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.624276 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6rm4w"] Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.625483 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.630163 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.630163 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.630586 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-bpz98" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.641939 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-7lm47"] Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.643607 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.652444 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6rm4w"] Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.690894 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7lm47"] Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754286 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-ovn-controller-tls-certs\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754356 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-etc-ovs\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754378 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkqt\" (UniqueName: \"kubernetes.io/projected/664813b7-20c4-40e4-b4a8-9beacfb177fa-kube-api-access-tjkqt\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754468 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-var-lib\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc 
kubenswrapper[4996]: I0228 09:19:08.754512 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-var-log-ovn\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754546 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-var-log\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754579 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-var-run\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754609 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-combined-ca-bundle\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754646 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-var-run\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754669 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2d9t\" (UniqueName: \"kubernetes.io/projected/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-kube-api-access-h2d9t\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754793 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-var-run-ovn\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754878 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-scripts\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.754962 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/664813b7-20c4-40e4-b4a8-9beacfb177fa-scripts\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856100 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-var-lib\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856152 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-var-log-ovn\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856186 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-var-log\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856220 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-var-run\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856237 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-combined-ca-bundle\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856257 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-var-run\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856277 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2d9t\" (UniqueName: \"kubernetes.io/projected/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-kube-api-access-h2d9t\") pod \"ovn-controller-6rm4w\" (UID: 
\"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856298 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-var-run-ovn\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856324 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-scripts\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856355 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/664813b7-20c4-40e4-b4a8-9beacfb177fa-scripts\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856382 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-ovn-controller-tls-certs\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856401 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-etc-ovs\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856415 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjkqt\" (UniqueName: \"kubernetes.io/projected/664813b7-20c4-40e4-b4a8-9beacfb177fa-kube-api-access-tjkqt\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856750 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-var-log-ovn\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856802 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-var-run\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856836 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-var-lib\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856893 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-var-run\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.856900 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-var-log\") pod 
\"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.857084 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-var-run-ovn\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.857164 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/664813b7-20c4-40e4-b4a8-9beacfb177fa-etc-ovs\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.861143 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-ovn-controller-tls-certs\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.861301 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-scripts\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.861590 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-combined-ca-bundle\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 
09:19:08.862142 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/664813b7-20c4-40e4-b4a8-9beacfb177fa-scripts\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.875288 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2d9t\" (UniqueName: \"kubernetes.io/projected/ab34e1ca-2f20-4604-85fa-ca92e0a1ce68-kube-api-access-h2d9t\") pod \"ovn-controller-6rm4w\" (UID: \"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68\") " pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.875906 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjkqt\" (UniqueName: \"kubernetes.io/projected/664813b7-20c4-40e4-b4a8-9beacfb177fa-kube-api-access-tjkqt\") pod \"ovn-controller-ovs-7lm47\" (UID: \"664813b7-20c4-40e4-b4a8-9beacfb177fa\") " pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.970935 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:08 crc kubenswrapper[4996]: I0228 09:19:08.987159 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.093778 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.099032 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.102700 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.102747 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.103090 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-74flv" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.103118 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.103143 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.114299 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.263567 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b250a70-da80-4cf5-842b-3a4897a4cbc8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.264132 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.264155 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/0b250a70-da80-4cf5-842b-3a4897a4cbc8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.264185 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7xc\" (UniqueName: \"kubernetes.io/projected/0b250a70-da80-4cf5-842b-3a4897a4cbc8-kube-api-access-4k7xc\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.264203 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b250a70-da80-4cf5-842b-3a4897a4cbc8-config\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.264224 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b250a70-da80-4cf5-842b-3a4897a4cbc8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.264382 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b250a70-da80-4cf5-842b-3a4897a4cbc8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.264451 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b250a70-da80-4cf5-842b-3a4897a4cbc8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.365552 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.365594 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b250a70-da80-4cf5-842b-3a4897a4cbc8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.365627 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4k7xc\" (UniqueName: \"kubernetes.io/projected/0b250a70-da80-4cf5-842b-3a4897a4cbc8-kube-api-access-4k7xc\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.365654 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b250a70-da80-4cf5-842b-3a4897a4cbc8-config\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.365681 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b250a70-da80-4cf5-842b-3a4897a4cbc8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " 
pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.365706 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b250a70-da80-4cf5-842b-3a4897a4cbc8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.365727 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b250a70-da80-4cf5-842b-3a4897a4cbc8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.365776 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b250a70-da80-4cf5-842b-3a4897a4cbc8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.366079 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.367024 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0b250a70-da80-4cf5-842b-3a4897a4cbc8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.367130 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b250a70-da80-4cf5-842b-3a4897a4cbc8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.367628 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b250a70-da80-4cf5-842b-3a4897a4cbc8-config\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.377647 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b250a70-da80-4cf5-842b-3a4897a4cbc8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.379150 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b250a70-da80-4cf5-842b-3a4897a4cbc8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.380793 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b250a70-da80-4cf5-842b-3a4897a4cbc8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.384972 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4k7xc\" (UniqueName: \"kubernetes.io/projected/0b250a70-da80-4cf5-842b-3a4897a4cbc8-kube-api-access-4k7xc\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") 
" pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.396677 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0b250a70-da80-4cf5-842b-3a4897a4cbc8\") " pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:09 crc kubenswrapper[4996]: I0228 09:19:09.418955 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:10 crc kubenswrapper[4996]: I0228 09:19:10.801086 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 28 09:19:10 crc kubenswrapper[4996]: I0228 09:19:10.825429 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 09:19:11 crc kubenswrapper[4996]: E0228 09:19:11.190539 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 28 09:19:11 crc kubenswrapper[4996]: E0228 09:19:11.190937 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fbww7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2f76m_openstack(1db8f17a-fafb-4b52-9e56-dea0e2baa90f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 09:19:11 crc kubenswrapper[4996]: E0228 09:19:11.192227 4996 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" podUID="1db8f17a-fafb-4b52-9e56-dea0e2baa90f" Feb 28 09:19:11 crc kubenswrapper[4996]: E0228 09:19:11.232585 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 28 09:19:11 crc kubenswrapper[4996]: E0228 09:19:11.232717 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n8b2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-dvjkb_openstack(d54aa8c8-3e81-449f-ac67-6de88e86643f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 09:19:11 crc kubenswrapper[4996]: E0228 09:19:11.236446 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" podUID="d54aa8c8-3e81-449f-ac67-6de88e86643f" Feb 28 09:19:11 crc kubenswrapper[4996]: I0228 09:19:11.395770 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs5zk"] Feb 28 09:19:11 crc kubenswrapper[4996]: W0228 09:19:11.483940 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07ce6626_c519_4ba1_91c4_46878b6eeaa2.slice/crio-3cd270793a1deee18ff4a92a44febafb4931cb97eefc6eb81e533a133060420c WatchSource:0}: Error finding container 3cd270793a1deee18ff4a92a44febafb4931cb97eefc6eb81e533a133060420c: Status 404 returned error can't find the container with id 3cd270793a1deee18ff4a92a44febafb4931cb97eefc6eb81e533a133060420c Feb 28 09:19:11 crc kubenswrapper[4996]: I0228 09:19:11.732834 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 09:19:11 crc kubenswrapper[4996]: W0228 09:19:11.742591 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25c48fac_9425_4af6_aa7d_6b2c2428ef2d.slice/crio-581354b0a3b6dc3b2bced2fad029287eda69637e51279b3570207dfa49ec44ac WatchSource:0}: Error finding container 581354b0a3b6dc3b2bced2fad029287eda69637e51279b3570207dfa49ec44ac: Status 404 returned error can't find the container with id 581354b0a3b6dc3b2bced2fad029287eda69637e51279b3570207dfa49ec44ac Feb 28 09:19:11 crc kubenswrapper[4996]: I0228 09:19:11.747372 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 09:19:11 crc kubenswrapper[4996]: I0228 09:19:11.967814 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 09:19:11 crc kubenswrapper[4996]: I0228 09:19:11.980591 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 28 09:19:11 crc kubenswrapper[4996]: 
W0228 09:19:11.988691 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f427004_3205_42d2_86db_84131a0d2ab7.slice/crio-147935bf0a2a32c47f830726d8b13f768f2c06cfc7d83f7fdbfc34ec955682a8 WatchSource:0}: Error finding container 147935bf0a2a32c47f830726d8b13f768f2c06cfc7d83f7fdbfc34ec955682a8: Status 404 returned error can't find the container with id 147935bf0a2a32c47f830726d8b13f768f2c06cfc7d83f7fdbfc34ec955682a8 Feb 28 09:19:11 crc kubenswrapper[4996]: I0228 09:19:11.997783 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6rm4w"] Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.087037 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3f427004-3205-42d2-86db-84131a0d2ab7","Type":"ContainerStarted","Data":"147935bf0a2a32c47f830726d8b13f768f2c06cfc7d83f7fdbfc34ec955682a8"} Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.088232 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6rm4w" event={"ID":"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68","Type":"ContainerStarted","Data":"ffec0a9d84c84dc7419c9b8b5b565bf300f2ba18c635dec861918c045b6ea69e"} Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.089444 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d394b420-eb09-49f3-a92c-32cbed3f63eb","Type":"ContainerStarted","Data":"d18810ae55bbe7a84a060d05c9cdb88fc705bae20c89f9884c92f3f223e8b1fb"} Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.090979 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25c48fac-9425-4af6-aa7d-6b2c2428ef2d","Type":"ContainerStarted","Data":"581354b0a3b6dc3b2bced2fad029287eda69637e51279b3570207dfa49ec44ac"} Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.092657 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" event={"ID":"07ce6626-c519-4ba1-91c4-46878b6eeaa2","Type":"ContainerStarted","Data":"3cd270793a1deee18ff4a92a44febafb4931cb97eefc6eb81e533a133060420c"} Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.097156 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b0396f01-c84a-4562-a8e3-6f166d52d629","Type":"ContainerStarted","Data":"12f0d8e25091040b26ba26d162c235264cafeb28495739ad82c99ec2f4321466"} Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.098707 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cce18b01-6974-43c9-86e2-564a4024564b","Type":"ContainerStarted","Data":"5bb78dd6972718e7edfc7cb609eebf636dd26e99057f977df33b4414b38c1131"} Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.100518 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7dfcffc8-039f-459c-9f97-d8d595506234","Type":"ContainerStarted","Data":"eade5aa9446da0bf28340201432908fb89b03a3f9170ea73fa529b895ac03915"} Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.146482 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-7lm47"] Feb 28 09:19:12 crc kubenswrapper[4996]: W0228 09:19:12.155553 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod664813b7_20c4_40e4_b4a8_9beacfb177fa.slice/crio-1ed20c4987e2e172054704f149bb0598101de435e582ffa28e9c4bc07ed46394 WatchSource:0}: Error finding container 1ed20c4987e2e172054704f149bb0598101de435e582ffa28e9c4bc07ed46394: Status 404 returned error can't find the container with id 1ed20c4987e2e172054704f149bb0598101de435e582ffa28e9c4bc07ed46394 Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.192301 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5hxhx"] Feb 28 09:19:12 crc 
kubenswrapper[4996]: I0228 09:19:12.193812 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.201297 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.215776 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edf409f-42c6-4e00-bf2e-6cd81644033a-combined-ca-bundle\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.215814 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1edf409f-42c6-4e00-bf2e-6cd81644033a-ovn-rundir\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.215833 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edf409f-42c6-4e00-bf2e-6cd81644033a-config\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.215862 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khh48\" (UniqueName: \"kubernetes.io/projected/1edf409f-42c6-4e00-bf2e-6cd81644033a-kube-api-access-khh48\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc 
kubenswrapper[4996]: I0228 09:19:12.215879 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edf409f-42c6-4e00-bf2e-6cd81644033a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.215945 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1edf409f-42c6-4e00-bf2e-6cd81644033a-ovs-rundir\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.230896 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5hxhx"] Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.317511 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edf409f-42c6-4e00-bf2e-6cd81644033a-combined-ca-bundle\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.317791 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1edf409f-42c6-4e00-bf2e-6cd81644033a-ovn-rundir\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.317812 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edf409f-42c6-4e00-bf2e-6cd81644033a-config\") pod 
\"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.318051 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1edf409f-42c6-4e00-bf2e-6cd81644033a-ovn-rundir\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.318098 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khh48\" (UniqueName: \"kubernetes.io/projected/1edf409f-42c6-4e00-bf2e-6cd81644033a-kube-api-access-khh48\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.318117 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edf409f-42c6-4e00-bf2e-6cd81644033a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.318715 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1edf409f-42c6-4e00-bf2e-6cd81644033a-config\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.319180 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1edf409f-42c6-4e00-bf2e-6cd81644033a-ovs-rundir\") pod \"ovn-controller-metrics-5hxhx\" (UID: 
\"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.319323 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1edf409f-42c6-4e00-bf2e-6cd81644033a-ovs-rundir\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.324829 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edf409f-42c6-4e00-bf2e-6cd81644033a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.328609 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edf409f-42c6-4e00-bf2e-6cd81644033a-combined-ca-bundle\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.340428 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khh48\" (UniqueName: \"kubernetes.io/projected/1edf409f-42c6-4e00-bf2e-6cd81644033a-kube-api-access-khh48\") pod \"ovn-controller-metrics-5hxhx\" (UID: \"1edf409f-42c6-4e00-bf2e-6cd81644033a\") " pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.485339 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.522252 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54aa8c8-3e81-449f-ac67-6de88e86643f-config\") pod \"d54aa8c8-3e81-449f-ac67-6de88e86643f\" (UID: \"d54aa8c8-3e81-449f-ac67-6de88e86643f\") " Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.522443 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/d54aa8c8-3e81-449f-ac67-6de88e86643f-kube-api-access-8n8b2\") pod \"d54aa8c8-3e81-449f-ac67-6de88e86643f\" (UID: \"d54aa8c8-3e81-449f-ac67-6de88e86643f\") " Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.524140 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d54aa8c8-3e81-449f-ac67-6de88e86643f-config" (OuterVolumeSpecName: "config") pod "d54aa8c8-3e81-449f-ac67-6de88e86643f" (UID: "d54aa8c8-3e81-449f-ac67-6de88e86643f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.524180 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5hxhx" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.526638 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d54aa8c8-3e81-449f-ac67-6de88e86643f-kube-api-access-8n8b2" (OuterVolumeSpecName: "kube-api-access-8n8b2") pod "d54aa8c8-3e81-449f-ac67-6de88e86643f" (UID: "d54aa8c8-3e81-449f-ac67-6de88e86643f"). InnerVolumeSpecName "kube-api-access-8n8b2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.532098 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.624513 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-config\") pod \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.624711 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbww7\" (UniqueName: \"kubernetes.io/projected/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-kube-api-access-fbww7\") pod \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.624786 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-dns-svc\") pod \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\" (UID: \"1db8f17a-fafb-4b52-9e56-dea0e2baa90f\") " Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.625228 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d54aa8c8-3e81-449f-ac67-6de88e86643f-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.625248 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n8b2\" (UniqueName: \"kubernetes.io/projected/d54aa8c8-3e81-449f-ac67-6de88e86643f-kube-api-access-8n8b2\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.625668 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1db8f17a-fafb-4b52-9e56-dea0e2baa90f" (UID: "1db8f17a-fafb-4b52-9e56-dea0e2baa90f"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.626063 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-config" (OuterVolumeSpecName: "config") pod "1db8f17a-fafb-4b52-9e56-dea0e2baa90f" (UID: "1db8f17a-fafb-4b52-9e56-dea0e2baa90f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.634306 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-kube-api-access-fbww7" (OuterVolumeSpecName: "kube-api-access-fbww7") pod "1db8f17a-fafb-4b52-9e56-dea0e2baa90f" (UID: "1db8f17a-fafb-4b52-9e56-dea0e2baa90f"). InnerVolumeSpecName "kube-api-access-fbww7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.726332 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbww7\" (UniqueName: \"kubernetes.io/projected/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-kube-api-access-fbww7\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.726607 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:12 crc kubenswrapper[4996]: I0228 09:19:12.726619 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1db8f17a-fafb-4b52-9e56-dea0e2baa90f-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.072283 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5hxhx"] Feb 28 09:19:13 crc kubenswrapper[4996]: W0228 09:19:13.103504 4996 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b250a70_da80_4cf5_842b_3a4897a4cbc8.slice/crio-aceb08a1182d26d3d1c03cbbf4089bae05471eb54515d18682e511bad856b4e3 WatchSource:0}: Error finding container aceb08a1182d26d3d1c03cbbf4089bae05471eb54515d18682e511bad856b4e3: Status 404 returned error can't find the container with id aceb08a1182d26d3d1c03cbbf4089bae05471eb54515d18682e511bad856b4e3 Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.105101 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.119849 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" event={"ID":"d54aa8c8-3e81-449f-ac67-6de88e86643f","Type":"ContainerDied","Data":"3fd531c24d0c06abc391d2698f930530c30186163cb7687361f6e95561f14180"} Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.120063 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dvjkb" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.122246 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7lm47" event={"ID":"664813b7-20c4-40e4-b4a8-9beacfb177fa","Type":"ContainerStarted","Data":"1ed20c4987e2e172054704f149bb0598101de435e582ffa28e9c4bc07ed46394"} Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.125135 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5hxhx" event={"ID":"1edf409f-42c6-4e00-bf2e-6cd81644033a","Type":"ContainerStarted","Data":"d081e03d4b338c33d4e70add79380bfd45f44b60bc58eaec8e055523bb557267"} Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.128276 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" event={"ID":"1db8f17a-fafb-4b52-9e56-dea0e2baa90f","Type":"ContainerDied","Data":"777d3d682b864a29e302687c4665dbeff3239878dc55e5651d24b64c5abf3b2b"} Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.128450 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2f76m" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.172334 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvjkb"] Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.180404 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dvjkb"] Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.206052 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f76m"] Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.211214 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2f76m"] Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.284859 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.286205 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.290305 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.294106 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.294179 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.297126 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4r7w2" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.305200 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.338336 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.338397 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.338476 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.338523 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.338564 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.338791 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-config\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.338849 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msshb\" (UniqueName: \"kubernetes.io/projected/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-kube-api-access-msshb\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.338883 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.441642 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.441692 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.441729 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-config\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " 
pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.441763 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msshb\" (UniqueName: \"kubernetes.io/projected/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-kube-api-access-msshb\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.441791 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.441828 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.441865 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.441901 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.442209 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.442668 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.443212 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-config\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.443808 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.447543 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.447611 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.452069 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.459220 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msshb\" (UniqueName: \"kubernetes.io/projected/46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e-kube-api-access-msshb\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.465347 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e\") " pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:13 crc kubenswrapper[4996]: I0228 09:19:13.650377 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:14 crc kubenswrapper[4996]: I0228 09:19:14.135472 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" event={"ID":"dd7293e9-0617-4241-b0fb-fe3ac621adbf","Type":"ContainerStarted","Data":"a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39"} Feb 28 09:19:14 crc kubenswrapper[4996]: I0228 09:19:14.136399 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0b250a70-da80-4cf5-842b-3a4897a4cbc8","Type":"ContainerStarted","Data":"aceb08a1182d26d3d1c03cbbf4089bae05471eb54515d18682e511bad856b4e3"} Feb 28 09:19:14 crc kubenswrapper[4996]: I0228 09:19:14.137854 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" event={"ID":"07ce6626-c519-4ba1-91c4-46878b6eeaa2","Type":"ContainerStarted","Data":"6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237"} Feb 28 09:19:14 crc kubenswrapper[4996]: I0228 09:19:14.227286 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 09:19:14 crc kubenswrapper[4996]: W0228 09:19:14.244953 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46fbd9a6_fd40_4ab8_bad5_f4c0397fdb3e.slice/crio-0389075ef07b4f66656a7d1ff2eb137ceb72dc3ebc6a76a175c015b91d2cfac0 WatchSource:0}: Error finding container 0389075ef07b4f66656a7d1ff2eb137ceb72dc3ebc6a76a175c015b91d2cfac0: Status 404 returned error can't find the container with id 0389075ef07b4f66656a7d1ff2eb137ceb72dc3ebc6a76a175c015b91d2cfac0 Feb 28 09:19:15 crc kubenswrapper[4996]: I0228 09:19:15.042547 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1db8f17a-fafb-4b52-9e56-dea0e2baa90f" path="/var/lib/kubelet/pods/1db8f17a-fafb-4b52-9e56-dea0e2baa90f/volumes" Feb 28 09:19:15 crc kubenswrapper[4996]: I0228 09:19:15.042888 
4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d54aa8c8-3e81-449f-ac67-6de88e86643f" path="/var/lib/kubelet/pods/d54aa8c8-3e81-449f-ac67-6de88e86643f/volumes" Feb 28 09:19:15 crc kubenswrapper[4996]: I0228 09:19:15.146297 4996 generic.go:334] "Generic (PLEG): container finished" podID="dd7293e9-0617-4241-b0fb-fe3ac621adbf" containerID="a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39" exitCode=0 Feb 28 09:19:15 crc kubenswrapper[4996]: I0228 09:19:15.146380 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" event={"ID":"dd7293e9-0617-4241-b0fb-fe3ac621adbf","Type":"ContainerDied","Data":"a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39"} Feb 28 09:19:15 crc kubenswrapper[4996]: I0228 09:19:15.149643 4996 generic.go:334] "Generic (PLEG): container finished" podID="07ce6626-c519-4ba1-91c4-46878b6eeaa2" containerID="6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237" exitCode=0 Feb 28 09:19:15 crc kubenswrapper[4996]: I0228 09:19:15.149714 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" event={"ID":"07ce6626-c519-4ba1-91c4-46878b6eeaa2","Type":"ContainerDied","Data":"6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237"} Feb 28 09:19:15 crc kubenswrapper[4996]: I0228 09:19:15.151641 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e","Type":"ContainerStarted","Data":"0389075ef07b4f66656a7d1ff2eb137ceb72dc3ebc6a76a175c015b91d2cfac0"} Feb 28 09:19:24 crc kubenswrapper[4996]: I0228 09:19:24.214238 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" event={"ID":"07ce6626-c519-4ba1-91c4-46878b6eeaa2","Type":"ContainerStarted","Data":"522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f"} Feb 28 09:19:24 crc kubenswrapper[4996]: I0228 
09:19:24.214745 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:19:24 crc kubenswrapper[4996]: I0228 09:19:24.217554 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6rm4w" event={"ID":"ab34e1ca-2f20-4604-85fa-ca92e0a1ce68","Type":"ContainerStarted","Data":"91d9036bb641fd65d0eac4e871f828aeafe458290c3544d0689356cba87da30f"} Feb 28 09:19:24 crc kubenswrapper[4996]: I0228 09:19:24.217907 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-6rm4w" Feb 28 09:19:24 crc kubenswrapper[4996]: I0228 09:19:24.222056 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" event={"ID":"dd7293e9-0617-4241-b0fb-fe3ac621adbf","Type":"ContainerStarted","Data":"3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e"} Feb 28 09:19:24 crc kubenswrapper[4996]: I0228 09:19:24.222447 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:19:24 crc kubenswrapper[4996]: I0228 09:19:24.235691 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" podStartSLOduration=25.235672709 podStartE2EDuration="25.235672709s" podCreationTimestamp="2026-02-28 09:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:19:24.232340268 +0000 UTC m=+1127.923143079" watchObservedRunningTime="2026-02-28 09:19:24.235672709 +0000 UTC m=+1127.926475520" Feb 28 09:19:24 crc kubenswrapper[4996]: I0228 09:19:24.258376 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6rm4w" podStartSLOduration=5.226294706 podStartE2EDuration="16.258358935s" podCreationTimestamp="2026-02-28 09:19:08 +0000 UTC" firstStartedPulling="2026-02-28 
09:19:12.002711693 +0000 UTC m=+1115.693514504" lastFinishedPulling="2026-02-28 09:19:23.034775922 +0000 UTC m=+1126.725578733" observedRunningTime="2026-02-28 09:19:24.251456509 +0000 UTC m=+1127.942259330" watchObservedRunningTime="2026-02-28 09:19:24.258358935 +0000 UTC m=+1127.949161746" Feb 28 09:19:24 crc kubenswrapper[4996]: I0228 09:19:24.276523 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" podStartSLOduration=17.690211176 podStartE2EDuration="26.276506503s" podCreationTimestamp="2026-02-28 09:18:58 +0000 UTC" firstStartedPulling="2026-02-28 09:19:02.715320891 +0000 UTC m=+1106.406123702" lastFinishedPulling="2026-02-28 09:19:11.301616218 +0000 UTC m=+1114.992419029" observedRunningTime="2026-02-28 09:19:24.273184353 +0000 UTC m=+1127.963987164" watchObservedRunningTime="2026-02-28 09:19:24.276506503 +0000 UTC m=+1127.967309314" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.232570 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d394b420-eb09-49f3-a92c-32cbed3f63eb","Type":"ContainerStarted","Data":"a25211526baa7403188021f3fc538e87de9e8c663c7b688919dd5d11965d108c"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.234693 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0b250a70-da80-4cf5-842b-3a4897a4cbc8","Type":"ContainerStarted","Data":"c7a47a26670466bfb0906462abfc151ae0c6b04b931519ff1403d38d0be308e3"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.234745 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0b250a70-da80-4cf5-842b-3a4897a4cbc8","Type":"ContainerStarted","Data":"1c7fe91f2f2757c0fdd5feb54e5cfe7fac58ae8239988377dd7212d04b30768b"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.236841 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7lm47" 
event={"ID":"664813b7-20c4-40e4-b4a8-9beacfb177fa","Type":"ContainerStarted","Data":"7a5cd7e5deb030fac07cd2f336ef09f8d3b8cdadf66b7ee33bd196011707422c"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.238965 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b0396f01-c84a-4562-a8e3-6f166d52d629","Type":"ContainerStarted","Data":"d5f6f46a7f6d9003eebe9c427d77bdba0fc4b0d09937fd196d48a71dbe7eb71e"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.239203 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.240377 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5hxhx" event={"ID":"1edf409f-42c6-4e00-bf2e-6cd81644033a","Type":"ContainerStarted","Data":"7823e916414e4447bebd1a17aa833d1de1947160f36f664a0f847cc9799321ff"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.242584 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7dfcffc8-039f-459c-9f97-d8d595506234","Type":"ContainerStarted","Data":"3ed538687163ff88d97f08dd4b725bdd040099b40e9ac9d1bdbc3e2e3d8f19f4"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.244615 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e","Type":"ContainerStarted","Data":"7fcb12d971d4d41f1bf59f739225e27eb8d61f455165fd131f0e28430371d258"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.244638 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e","Type":"ContainerStarted","Data":"e4be6e953e59ec6fd6b47742956ce18934368700a8ec392c73cce09832dcc0a9"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.246273 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"cce18b01-6974-43c9-86e2-564a4024564b","Type":"ContainerStarted","Data":"3bf51e95f124bca75b482c1d8b34dfd28771a5f01a6497d618f42ea26c085e77"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.248692 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3f427004-3205-42d2-86db-84131a0d2ab7","Type":"ContainerStarted","Data":"04fba5a93cf2cb21eb7f084b90676fcb6fd51bbf7af4e8e4011984c785512ec7"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.249184 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.251940 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25c48fac-9425-4af6-aa7d-6b2c2428ef2d","Type":"ContainerStarted","Data":"f0be925260b4f5639d4515a1b0d3acc89749390809d4484bc6b74a4e6baa510c"} Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.273612 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=8.342295353 podStartE2EDuration="20.273593414s" podCreationTimestamp="2026-02-28 09:19:05 +0000 UTC" firstStartedPulling="2026-02-28 09:19:11.984440593 +0000 UTC m=+1115.675243404" lastFinishedPulling="2026-02-28 09:19:23.915738614 +0000 UTC m=+1127.606541465" observedRunningTime="2026-02-28 09:19:25.273262527 +0000 UTC m=+1128.964065368" watchObservedRunningTime="2026-02-28 09:19:25.273593414 +0000 UTC m=+1128.964396225" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.331476 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.707266434 podStartE2EDuration="22.331455759s" podCreationTimestamp="2026-02-28 09:19:03 +0000 UTC" firstStartedPulling="2026-02-28 09:19:11.99428167 +0000 UTC m=+1115.685084481" lastFinishedPulling="2026-02-28 09:19:22.618470995 +0000 
UTC m=+1126.309273806" observedRunningTime="2026-02-28 09:19:25.330500937 +0000 UTC m=+1129.021303758" watchObservedRunningTime="2026-02-28 09:19:25.331455759 +0000 UTC m=+1129.022258590" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.353106 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5hxhx" podStartSLOduration=3.246785523 podStartE2EDuration="13.353082231s" podCreationTimestamp="2026-02-28 09:19:12 +0000 UTC" firstStartedPulling="2026-02-28 09:19:13.072406845 +0000 UTC m=+1116.763209676" lastFinishedPulling="2026-02-28 09:19:23.178703573 +0000 UTC m=+1126.869506384" observedRunningTime="2026-02-28 09:19:25.347344643 +0000 UTC m=+1129.038147464" watchObservedRunningTime="2026-02-28 09:19:25.353082231 +0000 UTC m=+1129.043885052" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.422791 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=7.496964656 podStartE2EDuration="17.422769342s" podCreationTimestamp="2026-02-28 09:19:08 +0000 UTC" firstStartedPulling="2026-02-28 09:19:13.108994317 +0000 UTC m=+1116.799797128" lastFinishedPulling="2026-02-28 09:19:23.034799003 +0000 UTC m=+1126.725601814" observedRunningTime="2026-02-28 09:19:25.413951298 +0000 UTC m=+1129.104754119" watchObservedRunningTime="2026-02-28 09:19:25.422769342 +0000 UTC m=+1129.113572173" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.447818 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.736817449 podStartE2EDuration="13.447779954s" podCreationTimestamp="2026-02-28 09:19:12 +0000 UTC" firstStartedPulling="2026-02-28 09:19:14.246597756 +0000 UTC m=+1117.937400567" lastFinishedPulling="2026-02-28 09:19:22.957560261 +0000 UTC m=+1126.648363072" observedRunningTime="2026-02-28 09:19:25.439498935 +0000 UTC m=+1129.130301746" watchObservedRunningTime="2026-02-28 
09:19:25.447779954 +0000 UTC m=+1129.138582765" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.552637 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qrfcq"] Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.584261 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h9p66"] Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.586338 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.588441 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.610464 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h9p66"] Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.646314 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtstb\" (UniqueName: \"kubernetes.io/projected/8862f16a-c447-4311-887a-3621b4721439-kube-api-access-mtstb\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.646382 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.646481 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-config\") pod 
\"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.646517 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.650755 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.691207 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs5zk"] Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.726871 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fgmfq"] Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.728473 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.730856 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.734617 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fgmfq"] Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.748069 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-config\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.748123 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.748166 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtstb\" (UniqueName: \"kubernetes.io/projected/8862f16a-c447-4311-887a-3621b4721439-kube-api-access-mtstb\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.748198 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.749226 
4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.749235 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-config\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.749410 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.775753 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtstb\" (UniqueName: \"kubernetes.io/projected/8862f16a-c447-4311-887a-3621b4721439-kube-api-access-mtstb\") pod \"dnsmasq-dns-7fd796d7df-h9p66\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.849420 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.849479 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.849529 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-config\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.849604 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4l9\" (UniqueName: \"kubernetes.io/projected/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-kube-api-access-xj4l9\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.849658 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.911353 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.951350 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj4l9\" (UniqueName: \"kubernetes.io/projected/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-kube-api-access-xj4l9\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.951604 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.951664 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.951684 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.951722 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-config\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" 
Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.952482 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-config\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.953236 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.953703 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.954201 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:25 crc kubenswrapper[4996]: I0228 09:19:25.967714 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj4l9\" (UniqueName: \"kubernetes.io/projected/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-kube-api-access-xj4l9\") pod \"dnsmasq-dns-86db49b7ff-fgmfq\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.044626 4996 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.259496 4996 generic.go:334] "Generic (PLEG): container finished" podID="664813b7-20c4-40e4-b4a8-9beacfb177fa" containerID="7a5cd7e5deb030fac07cd2f336ef09f8d3b8cdadf66b7ee33bd196011707422c" exitCode=0 Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.259700 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7lm47" event={"ID":"664813b7-20c4-40e4-b4a8-9beacfb177fa","Type":"ContainerDied","Data":"7a5cd7e5deb030fac07cd2f336ef09f8d3b8cdadf66b7ee33bd196011707422c"} Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.261367 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" podUID="07ce6626-c519-4ba1-91c4-46878b6eeaa2" containerName="dnsmasq-dns" containerID="cri-o://522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f" gracePeriod=10 Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.390856 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h9p66"] Feb 28 09:19:26 crc kubenswrapper[4996]: W0228 09:19:26.395907 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8862f16a_c447_4311_887a_3621b4721439.slice/crio-05a36204edead93c34ea755192e92e310e2ba7291de27f2293022b4cede8b2ae WatchSource:0}: Error finding container 05a36204edead93c34ea755192e92e310e2ba7291de27f2293022b4cede8b2ae: Status 404 returned error can't find the container with id 05a36204edead93c34ea755192e92e310e2ba7291de27f2293022b4cede8b2ae Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.496602 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fgmfq"] Feb 28 09:19:26 crc kubenswrapper[4996]: W0228 09:19:26.512832 4996 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2328d30_7bd5_4fc4_b7d7_cad17b8d3eff.slice/crio-45bc67656c0b0b60414247d8ae3298433e9975ddb8e4a182b564c57cdf9c41e2 WatchSource:0}: Error finding container 45bc67656c0b0b60414247d8ae3298433e9975ddb8e4a182b564c57cdf9c41e2: Status 404 returned error can't find the container with id 45bc67656c0b0b60414247d8ae3298433e9975ddb8e4a182b564c57cdf9c41e2 Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.672944 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.762109 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-dns-svc\") pod \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.762199 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrz8h\" (UniqueName: \"kubernetes.io/projected/07ce6626-c519-4ba1-91c4-46878b6eeaa2-kube-api-access-rrz8h\") pod \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.762302 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-config\") pod \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\" (UID: \"07ce6626-c519-4ba1-91c4-46878b6eeaa2\") " Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.766824 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ce6626-c519-4ba1-91c4-46878b6eeaa2-kube-api-access-rrz8h" (OuterVolumeSpecName: "kube-api-access-rrz8h") pod "07ce6626-c519-4ba1-91c4-46878b6eeaa2" (UID: 
"07ce6626-c519-4ba1-91c4-46878b6eeaa2"). InnerVolumeSpecName "kube-api-access-rrz8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.807818 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "07ce6626-c519-4ba1-91c4-46878b6eeaa2" (UID: "07ce6626-c519-4ba1-91c4-46878b6eeaa2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.814884 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-config" (OuterVolumeSpecName: "config") pod "07ce6626-c519-4ba1-91c4-46878b6eeaa2" (UID: "07ce6626-c519-4ba1-91c4-46878b6eeaa2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.864047 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.864354 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrz8h\" (UniqueName: \"kubernetes.io/projected/07ce6626-c519-4ba1-91c4-46878b6eeaa2-kube-api-access-rrz8h\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:26 crc kubenswrapper[4996]: I0228 09:19:26.864367 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07ce6626-c519-4ba1-91c4-46878b6eeaa2-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.267621 4996 generic.go:334] "Generic (PLEG): container finished" podID="e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" 
containerID="b18dd1b43f7a6974dc8fcf3cb44679f9c6afac1412238ddad71ba5897d1a1dc3" exitCode=0 Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.268137 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" event={"ID":"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff","Type":"ContainerDied","Data":"b18dd1b43f7a6974dc8fcf3cb44679f9c6afac1412238ddad71ba5897d1a1dc3"} Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.268185 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" event={"ID":"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff","Type":"ContainerStarted","Data":"45bc67656c0b0b60414247d8ae3298433e9975ddb8e4a182b564c57cdf9c41e2"} Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.270893 4996 generic.go:334] "Generic (PLEG): container finished" podID="07ce6626-c519-4ba1-91c4-46878b6eeaa2" containerID="522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f" exitCode=0 Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.271042 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.270938 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" event={"ID":"07ce6626-c519-4ba1-91c4-46878b6eeaa2","Type":"ContainerDied","Data":"522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f"} Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.271224 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-gs5zk" event={"ID":"07ce6626-c519-4ba1-91c4-46878b6eeaa2","Type":"ContainerDied","Data":"3cd270793a1deee18ff4a92a44febafb4931cb97eefc6eb81e533a133060420c"} Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.271265 4996 scope.go:117] "RemoveContainer" containerID="522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.275064 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7lm47" event={"ID":"664813b7-20c4-40e4-b4a8-9beacfb177fa","Type":"ContainerStarted","Data":"23298816ac6c4c6b32a6c995810daf91f0f6e82b68ba03983fd151c95dda2dc8"} Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.275110 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-7lm47" event={"ID":"664813b7-20c4-40e4-b4a8-9beacfb177fa","Type":"ContainerStarted","Data":"4c66b4b0b580da8667eadeecece29d0487de7cb19036eb20ece8ff534807c818"} Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.275134 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.275209 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.277397 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="8862f16a-c447-4311-887a-3621b4721439" containerID="a68953a8253c10d4dfd56868f7a8f863ec9df2b316c6af83705e6d549d1f90c0" exitCode=0 Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.278158 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" event={"ID":"8862f16a-c447-4311-887a-3621b4721439","Type":"ContainerDied","Data":"a68953a8253c10d4dfd56868f7a8f863ec9df2b316c6af83705e6d549d1f90c0"} Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.278201 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" event={"ID":"8862f16a-c447-4311-887a-3621b4721439","Type":"ContainerStarted","Data":"05a36204edead93c34ea755192e92e310e2ba7291de27f2293022b4cede8b2ae"} Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.278225 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" podUID="dd7293e9-0617-4241-b0fb-fe3ac621adbf" containerName="dnsmasq-dns" containerID="cri-o://3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e" gracePeriod=10 Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.314263 4996 scope.go:117] "RemoveContainer" containerID="6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.359705 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs5zk"] Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.367593 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-gs5zk"] Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.393160 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-7lm47" podStartSLOduration=8.593379522 podStartE2EDuration="19.39314099s" podCreationTimestamp="2026-02-28 09:19:08 +0000 UTC" firstStartedPulling="2026-02-28 09:19:12.157886585 +0000 UTC 
m=+1115.848689396" lastFinishedPulling="2026-02-28 09:19:22.957648053 +0000 UTC m=+1126.648450864" observedRunningTime="2026-02-28 09:19:27.389483622 +0000 UTC m=+1131.080286443" watchObservedRunningTime="2026-02-28 09:19:27.39314099 +0000 UTC m=+1131.083943801" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.419363 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.464859 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.469283 4996 scope.go:117] "RemoveContainer" containerID="522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f" Feb 28 09:19:27 crc kubenswrapper[4996]: E0228 09:19:27.473129 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f\": container with ID starting with 522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f not found: ID does not exist" containerID="522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.473180 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f"} err="failed to get container status \"522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f\": rpc error: code = NotFound desc = could not find container \"522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f\": container with ID starting with 522f18a2de66d6aa27c1bbd957cf9920fccc3a16a231a95ccb28c0417139207f not found: ID does not exist" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.473208 4996 scope.go:117] "RemoveContainer" 
containerID="6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237" Feb 28 09:19:27 crc kubenswrapper[4996]: E0228 09:19:27.476594 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237\": container with ID starting with 6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237 not found: ID does not exist" containerID="6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.476635 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237"} err="failed to get container status \"6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237\": rpc error: code = NotFound desc = could not find container \"6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237\": container with ID starting with 6fac1969c35840cad279e1d5e57ca4d0e17a1bdb87f8659524777934db8c2237 not found: ID does not exist" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.717476 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.881177 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzz7v\" (UniqueName: \"kubernetes.io/projected/dd7293e9-0617-4241-b0fb-fe3ac621adbf-kube-api-access-bzz7v\") pod \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.881517 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-dns-svc\") pod \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.881547 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-config\") pod \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\" (UID: \"dd7293e9-0617-4241-b0fb-fe3ac621adbf\") " Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.886310 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7293e9-0617-4241-b0fb-fe3ac621adbf-kube-api-access-bzz7v" (OuterVolumeSpecName: "kube-api-access-bzz7v") pod "dd7293e9-0617-4241-b0fb-fe3ac621adbf" (UID: "dd7293e9-0617-4241-b0fb-fe3ac621adbf"). InnerVolumeSpecName "kube-api-access-bzz7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.919086 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-config" (OuterVolumeSpecName: "config") pod "dd7293e9-0617-4241-b0fb-fe3ac621adbf" (UID: "dd7293e9-0617-4241-b0fb-fe3ac621adbf"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.925523 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd7293e9-0617-4241-b0fb-fe3ac621adbf" (UID: "dd7293e9-0617-4241-b0fb-fe3ac621adbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.982722 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzz7v\" (UniqueName: \"kubernetes.io/projected/dd7293e9-0617-4241-b0fb-fe3ac621adbf-kube-api-access-bzz7v\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.982751 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:27 crc kubenswrapper[4996]: I0228 09:19:27.982760 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd7293e9-0617-4241-b0fb-fe3ac621adbf-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.288495 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" event={"ID":"8862f16a-c447-4311-887a-3621b4721439","Type":"ContainerStarted","Data":"beb0501797a65e674d9f96fad89aee7af92d5a49c53fb902dbd7ca6072cf0388"} Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.289058 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.290388 4996 generic.go:334] "Generic (PLEG): container finished" podID="25c48fac-9425-4af6-aa7d-6b2c2428ef2d" 
containerID="f0be925260b4f5639d4515a1b0d3acc89749390809d4484bc6b74a4e6baa510c" exitCode=0 Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.290524 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25c48fac-9425-4af6-aa7d-6b2c2428ef2d","Type":"ContainerDied","Data":"f0be925260b4f5639d4515a1b0d3acc89749390809d4484bc6b74a4e6baa510c"} Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.293814 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" event={"ID":"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff","Type":"ContainerStarted","Data":"7dcb86fbfeb76d775fc5d98d7fc7aa47f5aa531114adf9f386cf2ad20d2e808e"} Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.293994 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.296184 4996 generic.go:334] "Generic (PLEG): container finished" podID="dd7293e9-0617-4241-b0fb-fe3ac621adbf" containerID="3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e" exitCode=0 Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.296267 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.296306 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" event={"ID":"dd7293e9-0617-4241-b0fb-fe3ac621adbf","Type":"ContainerDied","Data":"3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e"} Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.296406 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-qrfcq" event={"ID":"dd7293e9-0617-4241-b0fb-fe3ac621adbf","Type":"ContainerDied","Data":"bc427296d6cc76aa57f270a9f90c9a0be64205fc6b1b526fe4b4eaca446755fd"} Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.296479 4996 scope.go:117] "RemoveContainer" containerID="3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.297996 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.325650 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" podStartSLOduration=3.325612363 podStartE2EDuration="3.325612363s" podCreationTimestamp="2026-02-28 09:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:19:28.309839183 +0000 UTC m=+1132.000642014" watchObservedRunningTime="2026-02-28 09:19:28.325612363 +0000 UTC m=+1132.016415244" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.363583 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" podStartSLOduration=3.363567758 podStartE2EDuration="3.363567758s" podCreationTimestamp="2026-02-28 09:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:19:28.358521556 +0000 UTC m=+1132.049324417" watchObservedRunningTime="2026-02-28 09:19:28.363567758 +0000 UTC m=+1132.054370569" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.384389 4996 scope.go:117] "RemoveContainer" containerID="a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.419282 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qrfcq"] Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.429972 4996 scope.go:117] "RemoveContainer" containerID="3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e" Feb 28 09:19:28 crc kubenswrapper[4996]: E0228 09:19:28.430501 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e\": container with ID starting with 3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e not found: ID does not exist" containerID="3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.430552 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e"} err="failed to get container status \"3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e\": rpc error: code = NotFound desc = could not find container \"3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e\": container with ID starting with 3a7bbb3fc6e2327044c6e97d54f8661992c0f7843da7dfd61a0c64affc491b9e not found: ID does not exist" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.430582 4996 scope.go:117] "RemoveContainer" containerID="a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39" Feb 28 09:19:28 crc kubenswrapper[4996]: E0228 
09:19:28.430991 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39\": container with ID starting with a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39 not found: ID does not exist" containerID="a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.431076 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39"} err="failed to get container status \"a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39\": rpc error: code = NotFound desc = could not find container \"a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39\": container with ID starting with a637ebf7de08109107a627ddbd2a947cb9e65bc847ecda2688e504ccca4eaf39 not found: ID does not exist" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.433647 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-qrfcq"] Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.650942 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:28 crc kubenswrapper[4996]: I0228 09:19:28.686739 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.063214 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07ce6626-c519-4ba1-91c4-46878b6eeaa2" path="/var/lib/kubelet/pods/07ce6626-c519-4ba1-91c4-46878b6eeaa2/volumes" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.064234 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd7293e9-0617-4241-b0fb-fe3ac621adbf" 
path="/var/lib/kubelet/pods/dd7293e9-0617-4241-b0fb-fe3ac621adbf/volumes" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.306234 4996 generic.go:334] "Generic (PLEG): container finished" podID="cce18b01-6974-43c9-86e2-564a4024564b" containerID="3bf51e95f124bca75b482c1d8b34dfd28771a5f01a6497d618f42ea26c085e77" exitCode=0 Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.306307 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cce18b01-6974-43c9-86e2-564a4024564b","Type":"ContainerDied","Data":"3bf51e95f124bca75b482c1d8b34dfd28771a5f01a6497d618f42ea26c085e77"} Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.308495 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"25c48fac-9425-4af6-aa7d-6b2c2428ef2d","Type":"ContainerStarted","Data":"34aa4d469b913c536d7e166ee54e22f513fddb5436ed14b7dadffd8e73124a5f"} Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.361929 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.533659536000002 podStartE2EDuration="28.36190732s" podCreationTimestamp="2026-02-28 09:19:01 +0000 UTC" firstStartedPulling="2026-02-28 09:19:11.745120432 +0000 UTC m=+1115.435923243" lastFinishedPulling="2026-02-28 09:19:22.573368216 +0000 UTC m=+1126.264171027" observedRunningTime="2026-02-28 09:19:29.359186624 +0000 UTC m=+1133.049989445" watchObservedRunningTime="2026-02-28 09:19:29.36190732 +0000 UTC m=+1133.052710141" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.366619 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.368033 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.687170 4996 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 28 09:19:29 crc kubenswrapper[4996]: E0228 09:19:29.687487 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ce6626-c519-4ba1-91c4-46878b6eeaa2" containerName="init" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.687502 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ce6626-c519-4ba1-91c4-46878b6eeaa2" containerName="init" Feb 28 09:19:29 crc kubenswrapper[4996]: E0228 09:19:29.687525 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7293e9-0617-4241-b0fb-fe3ac621adbf" containerName="dnsmasq-dns" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.687531 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7293e9-0617-4241-b0fb-fe3ac621adbf" containerName="dnsmasq-dns" Feb 28 09:19:29 crc kubenswrapper[4996]: E0228 09:19:29.687555 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7293e9-0617-4241-b0fb-fe3ac621adbf" containerName="init" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.687560 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7293e9-0617-4241-b0fb-fe3ac621adbf" containerName="init" Feb 28 09:19:29 crc kubenswrapper[4996]: E0228 09:19:29.687581 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ce6626-c519-4ba1-91c4-46878b6eeaa2" containerName="dnsmasq-dns" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.687587 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ce6626-c519-4ba1-91c4-46878b6eeaa2" containerName="dnsmasq-dns" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.687726 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7293e9-0617-4241-b0fb-fe3ac621adbf" containerName="dnsmasq-dns" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.687740 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ce6626-c519-4ba1-91c4-46878b6eeaa2" containerName="dnsmasq-dns" Feb 28 
09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.688532 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.697466 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.698672 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.698681 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xkqrx" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.698699 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.712748 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.813635 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-scripts\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.813690 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.813723 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-config\") pod 
\"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.814005 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.814092 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cmsf\" (UniqueName: \"kubernetes.io/projected/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-kube-api-access-8cmsf\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.814266 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.814498 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.916458 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc 
kubenswrapper[4996]: I0228 09:19:29.916543 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-scripts\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.916569 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.916603 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-config\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.916670 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.916699 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cmsf\" (UniqueName: \"kubernetes.io/projected/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-kube-api-access-8cmsf\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.916746 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.917822 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.918508 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-scripts\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.918613 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-config\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.922510 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.923695 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.923816 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:29 crc kubenswrapper[4996]: I0228 09:19:29.939252 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cmsf\" (UniqueName: \"kubernetes.io/projected/c38b2e2f-cb15-44c6-b4d9-1b9d80c57045-kube-api-access-8cmsf\") pod \"ovn-northd-0\" (UID: \"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045\") " pod="openstack/ovn-northd-0" Feb 28 09:19:30 crc kubenswrapper[4996]: I0228 09:19:30.017054 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 28 09:19:30 crc kubenswrapper[4996]: I0228 09:19:30.323586 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"cce18b01-6974-43c9-86e2-564a4024564b","Type":"ContainerStarted","Data":"49e57663e5ca3f8ac448703163f6ef8996765c87ffc01b08ffb73d813d7aca6c"} Feb 28 09:19:30 crc kubenswrapper[4996]: I0228 09:19:30.346151 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=18.645737347 podStartE2EDuration="30.346012278s" podCreationTimestamp="2026-02-28 09:19:00 +0000 UTC" firstStartedPulling="2026-02-28 09:19:11.25730152 +0000 UTC m=+1114.948104331" lastFinishedPulling="2026-02-28 09:19:22.957576451 +0000 UTC m=+1126.648379262" observedRunningTime="2026-02-28 09:19:30.344377479 +0000 UTC m=+1134.035180280" watchObservedRunningTime="2026-02-28 09:19:30.346012278 +0000 UTC m=+1134.036815099" Feb 28 09:19:30 crc kubenswrapper[4996]: I0228 09:19:30.450370 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 28 09:19:30 crc kubenswrapper[4996]: W0228 09:19:30.455455 4996 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc38b2e2f_cb15_44c6_b4d9_1b9d80c57045.slice/crio-d3e6a2181c3479a04ee398ea3e5b83ab7bbe1ca8d48c8ff0497cbf0bc25d0010 WatchSource:0}: Error finding container d3e6a2181c3479a04ee398ea3e5b83ab7bbe1ca8d48c8ff0497cbf0bc25d0010: Status 404 returned error can't find the container with id d3e6a2181c3479a04ee398ea3e5b83ab7bbe1ca8d48c8ff0497cbf0bc25d0010 Feb 28 09:19:31 crc kubenswrapper[4996]: I0228 09:19:31.329647 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045","Type":"ContainerStarted","Data":"d3e6a2181c3479a04ee398ea3e5b83ab7bbe1ca8d48c8ff0497cbf0bc25d0010"} Feb 28 09:19:31 crc kubenswrapper[4996]: I0228 09:19:31.741255 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 28 09:19:31 crc kubenswrapper[4996]: I0228 09:19:31.741586 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 28 09:19:32 crc kubenswrapper[4996]: I0228 09:19:32.336961 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045","Type":"ContainerStarted","Data":"c15fed8d2c728b8dc60a6a972826fca9c4d75e0e7d06462c7ad5ada288923573"} Feb 28 09:19:32 crc kubenswrapper[4996]: I0228 09:19:32.337330 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 28 09:19:32 crc kubenswrapper[4996]: I0228 09:19:32.337345 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"c38b2e2f-cb15-44c6-b4d9-1b9d80c57045","Type":"ContainerStarted","Data":"5527b8a6a85da6d890f3103b4f7ac8b2d00a14acdb10e3c3b099687f8d019915"} Feb 28 09:19:33 crc kubenswrapper[4996]: I0228 09:19:33.227276 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:33 crc kubenswrapper[4996]: I0228 09:19:33.227586 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:33 crc kubenswrapper[4996]: I0228 09:19:33.601783 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 28 09:19:33 crc kubenswrapper[4996]: I0228 09:19:33.635577 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.581626441 podStartE2EDuration="4.635543013s" podCreationTimestamp="2026-02-28 09:19:29 +0000 UTC" firstStartedPulling="2026-02-28 09:19:30.457401033 +0000 UTC m=+1134.148203834" lastFinishedPulling="2026-02-28 09:19:31.511317595 +0000 UTC m=+1135.202120406" observedRunningTime="2026-02-28 09:19:32.361473323 +0000 UTC m=+1136.052276134" watchObservedRunningTime="2026-02-28 09:19:33.635543013 +0000 UTC m=+1137.326345894" Feb 28 09:19:35 crc kubenswrapper[4996]: I0228 09:19:35.776933 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 28 09:19:35 crc kubenswrapper[4996]: I0228 09:19:35.913415 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:36 crc kubenswrapper[4996]: I0228 09:19:36.048181 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:19:36 crc kubenswrapper[4996]: I0228 09:19:36.119063 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h9p66"] Feb 28 09:19:36 crc kubenswrapper[4996]: I0228 09:19:36.367087 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" podUID="8862f16a-c447-4311-887a-3621b4721439" containerName="dnsmasq-dns" 
containerID="cri-o://beb0501797a65e674d9f96fad89aee7af92d5a49c53fb902dbd7ca6072cf0388" gracePeriod=10 Feb 28 09:19:37 crc kubenswrapper[4996]: I0228 09:19:37.376764 4996 generic.go:334] "Generic (PLEG): container finished" podID="8862f16a-c447-4311-887a-3621b4721439" containerID="beb0501797a65e674d9f96fad89aee7af92d5a49c53fb902dbd7ca6072cf0388" exitCode=0 Feb 28 09:19:37 crc kubenswrapper[4996]: I0228 09:19:37.376855 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" event={"ID":"8862f16a-c447-4311-887a-3621b4721439","Type":"ContainerDied","Data":"beb0501797a65e674d9f96fad89aee7af92d5a49c53fb902dbd7ca6072cf0388"} Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.149940 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.265833 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="cce18b01-6974-43c9-86e2-564a4024564b" containerName="galera" probeResult="failure" output=< Feb 28 09:19:40 crc kubenswrapper[4996]: wsrep_local_state_comment (Joined) differs from Synced Feb 28 09:19:40 crc kubenswrapper[4996]: > Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.425395 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.477734 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.527162 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.605930 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-config\") pod \"8862f16a-c447-4311-887a-3621b4721439\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.606034 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-ovsdbserver-nb\") pod \"8862f16a-c447-4311-887a-3621b4721439\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.606121 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-dns-svc\") pod \"8862f16a-c447-4311-887a-3621b4721439\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.606204 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtstb\" (UniqueName: \"kubernetes.io/projected/8862f16a-c447-4311-887a-3621b4721439-kube-api-access-mtstb\") pod \"8862f16a-c447-4311-887a-3621b4721439\" (UID: \"8862f16a-c447-4311-887a-3621b4721439\") " Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.626142 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8862f16a-c447-4311-887a-3621b4721439-kube-api-access-mtstb" (OuterVolumeSpecName: "kube-api-access-mtstb") pod "8862f16a-c447-4311-887a-3621b4721439" (UID: 
"8862f16a-c447-4311-887a-3621b4721439"). InnerVolumeSpecName "kube-api-access-mtstb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.648250 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8862f16a-c447-4311-887a-3621b4721439" (UID: "8862f16a-c447-4311-887a-3621b4721439"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.652620 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-config" (OuterVolumeSpecName: "config") pod "8862f16a-c447-4311-887a-3621b4721439" (UID: "8862f16a-c447-4311-887a-3621b4721439"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.661405 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8862f16a-c447-4311-887a-3621b4721439" (UID: "8862f16a-c447-4311-887a-3621b4721439"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.708359 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.708402 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.708418 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtstb\" (UniqueName: \"kubernetes.io/projected/8862f16a-c447-4311-887a-3621b4721439-kube-api-access-mtstb\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:40 crc kubenswrapper[4996]: I0228 09:19:40.708433 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8862f16a-c447-4311-887a-3621b4721439-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.412573 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" event={"ID":"8862f16a-c447-4311-887a-3621b4721439","Type":"ContainerDied","Data":"05a36204edead93c34ea755192e92e310e2ba7291de27f2293022b4cede8b2ae"} Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.412924 4996 scope.go:117] "RemoveContainer" containerID="beb0501797a65e674d9f96fad89aee7af92d5a49c53fb902dbd7ca6072cf0388" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.412582 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-h9p66" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.433958 4996 scope.go:117] "RemoveContainer" containerID="a68953a8253c10d4dfd56868f7a8f863ec9df2b316c6af83705e6d549d1f90c0" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.437731 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h9p66"] Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.443248 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-h9p66"] Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.828578 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.952668 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-5f76l"] Feb 28 09:19:41 crc kubenswrapper[4996]: E0228 09:19:41.952978 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8862f16a-c447-4311-887a-3621b4721439" containerName="init" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.952995 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8862f16a-c447-4311-887a-3621b4721439" containerName="init" Feb 28 09:19:41 crc kubenswrapper[4996]: E0228 09:19:41.953041 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8862f16a-c447-4311-887a-3621b4721439" containerName="dnsmasq-dns" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.953049 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8862f16a-c447-4311-887a-3621b4721439" containerName="dnsmasq-dns" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.953204 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="8862f16a-c447-4311-887a-3621b4721439" containerName="dnsmasq-dns" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.953688 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.955737 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 28 09:19:41 crc kubenswrapper[4996]: I0228 09:19:41.969712 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5f76l"] Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.026821 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7np7\" (UniqueName: \"kubernetes.io/projected/dcb96126-2830-41c5-9c53-777a164e5e29-kube-api-access-j7np7\") pod \"root-account-create-update-5f76l\" (UID: \"dcb96126-2830-41c5-9c53-777a164e5e29\") " pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.026879 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcb96126-2830-41c5-9c53-777a164e5e29-operator-scripts\") pod \"root-account-create-update-5f76l\" (UID: \"dcb96126-2830-41c5-9c53-777a164e5e29\") " pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.129108 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7np7\" (UniqueName: \"kubernetes.io/projected/dcb96126-2830-41c5-9c53-777a164e5e29-kube-api-access-j7np7\") pod \"root-account-create-update-5f76l\" (UID: \"dcb96126-2830-41c5-9c53-777a164e5e29\") " pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.129282 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcb96126-2830-41c5-9c53-777a164e5e29-operator-scripts\") pod \"root-account-create-update-5f76l\" (UID: 
\"dcb96126-2830-41c5-9c53-777a164e5e29\") " pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.131256 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcb96126-2830-41c5-9c53-777a164e5e29-operator-scripts\") pod \"root-account-create-update-5f76l\" (UID: \"dcb96126-2830-41c5-9c53-777a164e5e29\") " pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.151488 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7np7\" (UniqueName: \"kubernetes.io/projected/dcb96126-2830-41c5-9c53-777a164e5e29-kube-api-access-j7np7\") pod \"root-account-create-update-5f76l\" (UID: \"dcb96126-2830-41c5-9c53-777a164e5e29\") " pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.249231 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.249302 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.269147 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:42 crc kubenswrapper[4996]: I0228 09:19:42.731156 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-5f76l"] Feb 28 09:19:42 crc kubenswrapper[4996]: W0228 09:19:42.742316 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcb96126_2830_41c5_9c53_777a164e5e29.slice/crio-bbd762fd98f1a4def5502fb73716cdac07bfbf9353fe5843a364ad7b92604e6d WatchSource:0}: Error finding container bbd762fd98f1a4def5502fb73716cdac07bfbf9353fe5843a364ad7b92604e6d: Status 404 returned error can't find the container with id bbd762fd98f1a4def5502fb73716cdac07bfbf9353fe5843a364ad7b92604e6d Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.045902 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8862f16a-c447-4311-887a-3621b4721439" path="/var/lib/kubelet/pods/8862f16a-c447-4311-887a-3621b4721439/volumes" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.438827 4996 generic.go:334] "Generic (PLEG): container finished" podID="dcb96126-2830-41c5-9c53-777a164e5e29" containerID="66b7957b93432cd124025db8010c4f8251aa35695b9ca057cd207b0509f6fc75" exitCode=0 Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.438872 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5f76l" event={"ID":"dcb96126-2830-41c5-9c53-777a164e5e29","Type":"ContainerDied","Data":"66b7957b93432cd124025db8010c4f8251aa35695b9ca057cd207b0509f6fc75"} Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.438932 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5f76l" event={"ID":"dcb96126-2830-41c5-9c53-777a164e5e29","Type":"ContainerStarted","Data":"bbd762fd98f1a4def5502fb73716cdac07bfbf9353fe5843a364ad7b92604e6d"} Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.779477 4996 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gtn9c"] Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.782205 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.792215 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gtn9c"] Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.860423 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de95567a-4315-4687-9b8b-1a94bba6b4c4-operator-scripts\") pod \"glance-db-create-gtn9c\" (UID: \"de95567a-4315-4687-9b8b-1a94bba6b4c4\") " pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.860530 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmvj2\" (UniqueName: \"kubernetes.io/projected/de95567a-4315-4687-9b8b-1a94bba6b4c4-kube-api-access-kmvj2\") pod \"glance-db-create-gtn9c\" (UID: \"de95567a-4315-4687-9b8b-1a94bba6b4c4\") " pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.870893 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4d7d-account-create-update-rdmzh"] Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.877101 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.882199 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.894233 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4d7d-account-create-update-rdmzh"] Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.962690 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnckl\" (UniqueName: \"kubernetes.io/projected/edeb8336-9b74-47dd-acb8-22384803c2c6-kube-api-access-wnckl\") pod \"glance-4d7d-account-create-update-rdmzh\" (UID: \"edeb8336-9b74-47dd-acb8-22384803c2c6\") " pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.963077 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de95567a-4315-4687-9b8b-1a94bba6b4c4-operator-scripts\") pod \"glance-db-create-gtn9c\" (UID: \"de95567a-4315-4687-9b8b-1a94bba6b4c4\") " pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.963322 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmvj2\" (UniqueName: \"kubernetes.io/projected/de95567a-4315-4687-9b8b-1a94bba6b4c4-kube-api-access-kmvj2\") pod \"glance-db-create-gtn9c\" (UID: \"de95567a-4315-4687-9b8b-1a94bba6b4c4\") " pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.963496 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edeb8336-9b74-47dd-acb8-22384803c2c6-operator-scripts\") pod \"glance-4d7d-account-create-update-rdmzh\" (UID: 
\"edeb8336-9b74-47dd-acb8-22384803c2c6\") " pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.964469 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de95567a-4315-4687-9b8b-1a94bba6b4c4-operator-scripts\") pod \"glance-db-create-gtn9c\" (UID: \"de95567a-4315-4687-9b8b-1a94bba6b4c4\") " pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:43 crc kubenswrapper[4996]: I0228 09:19:43.987214 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmvj2\" (UniqueName: \"kubernetes.io/projected/de95567a-4315-4687-9b8b-1a94bba6b4c4-kube-api-access-kmvj2\") pod \"glance-db-create-gtn9c\" (UID: \"de95567a-4315-4687-9b8b-1a94bba6b4c4\") " pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.064652 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnckl\" (UniqueName: \"kubernetes.io/projected/edeb8336-9b74-47dd-acb8-22384803c2c6-kube-api-access-wnckl\") pod \"glance-4d7d-account-create-update-rdmzh\" (UID: \"edeb8336-9b74-47dd-acb8-22384803c2c6\") " pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.064766 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edeb8336-9b74-47dd-acb8-22384803c2c6-operator-scripts\") pod \"glance-4d7d-account-create-update-rdmzh\" (UID: \"edeb8336-9b74-47dd-acb8-22384803c2c6\") " pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.065482 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edeb8336-9b74-47dd-acb8-22384803c2c6-operator-scripts\") pod \"glance-4d7d-account-create-update-rdmzh\" 
(UID: \"edeb8336-9b74-47dd-acb8-22384803c2c6\") " pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.080113 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnckl\" (UniqueName: \"kubernetes.io/projected/edeb8336-9b74-47dd-acb8-22384803c2c6-kube-api-access-wnckl\") pod \"glance-4d7d-account-create-update-rdmzh\" (UID: \"edeb8336-9b74-47dd-acb8-22384803c2c6\") " pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.110287 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.191596 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.534651 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-lb9rn"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.535824 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.542006 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lb9rn"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.588570 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gtn9c"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.663470 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-56c8-account-create-update-wkft2"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.664458 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.668269 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.676665 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37fcb493-2d66-4e54-a21a-bb6f84f68479-operator-scripts\") pod \"keystone-db-create-lb9rn\" (UID: \"37fcb493-2d66-4e54-a21a-bb6f84f68479\") " pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.676873 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79kfl\" (UniqueName: \"kubernetes.io/projected/37fcb493-2d66-4e54-a21a-bb6f84f68479-kube-api-access-79kfl\") pod \"keystone-db-create-lb9rn\" (UID: \"37fcb493-2d66-4e54-a21a-bb6f84f68479\") " pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.684474 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56c8-account-create-update-wkft2"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.707068 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4d7d-account-create-update-rdmzh"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.777943 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kzqr\" (UniqueName: \"kubernetes.io/projected/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-kube-api-access-7kzqr\") pod \"keystone-56c8-account-create-update-wkft2\" (UID: \"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\") " pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.778037 4996 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37fcb493-2d66-4e54-a21a-bb6f84f68479-operator-scripts\") pod \"keystone-db-create-lb9rn\" (UID: \"37fcb493-2d66-4e54-a21a-bb6f84f68479\") " pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.778066 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-operator-scripts\") pod \"keystone-56c8-account-create-update-wkft2\" (UID: \"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\") " pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.778154 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79kfl\" (UniqueName: \"kubernetes.io/projected/37fcb493-2d66-4e54-a21a-bb6f84f68479-kube-api-access-79kfl\") pod \"keystone-db-create-lb9rn\" (UID: \"37fcb493-2d66-4e54-a21a-bb6f84f68479\") " pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.779226 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37fcb493-2d66-4e54-a21a-bb6f84f68479-operator-scripts\") pod \"keystone-db-create-lb9rn\" (UID: \"37fcb493-2d66-4e54-a21a-bb6f84f68479\") " pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.785466 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-6fzk4"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.786620 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.806808 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6fzk4"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.818317 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79kfl\" (UniqueName: \"kubernetes.io/projected/37fcb493-2d66-4e54-a21a-bb6f84f68479-kube-api-access-79kfl\") pod \"keystone-db-create-lb9rn\" (UID: \"37fcb493-2d66-4e54-a21a-bb6f84f68479\") " pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.856729 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-df7d-account-create-update-ttm7k"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.857953 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.860389 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.860984 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.877208 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-df7d-account-create-update-ttm7k"] Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.889553 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-operator-scripts\") pod \"keystone-56c8-account-create-update-wkft2\" (UID: \"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\") " pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.889770 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cks9p\" (UniqueName: \"kubernetes.io/projected/3a9977b0-368a-4fd5-997f-760640256681-kube-api-access-cks9p\") pod \"placement-db-create-6fzk4\" (UID: \"3a9977b0-368a-4fd5-997f-760640256681\") " pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.889953 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a9977b0-368a-4fd5-997f-760640256681-operator-scripts\") pod \"placement-db-create-6fzk4\" (UID: \"3a9977b0-368a-4fd5-997f-760640256681\") " pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.889993 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kzqr\" (UniqueName: \"kubernetes.io/projected/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-kube-api-access-7kzqr\") pod \"keystone-56c8-account-create-update-wkft2\" (UID: 
\"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\") " pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.891216 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-operator-scripts\") pod \"keystone-56c8-account-create-update-wkft2\" (UID: \"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\") " pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.899047 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.910945 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kzqr\" (UniqueName: \"kubernetes.io/projected/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-kube-api-access-7kzqr\") pod \"keystone-56c8-account-create-update-wkft2\" (UID: \"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\") " pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.992660 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcb96126-2830-41c5-9c53-777a164e5e29-operator-scripts\") pod \"dcb96126-2830-41c5-9c53-777a164e5e29\" (UID: \"dcb96126-2830-41c5-9c53-777a164e5e29\") " Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.992731 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7np7\" (UniqueName: \"kubernetes.io/projected/dcb96126-2830-41c5-9c53-777a164e5e29-kube-api-access-j7np7\") pod \"dcb96126-2830-41c5-9c53-777a164e5e29\" (UID: \"dcb96126-2830-41c5-9c53-777a164e5e29\") " Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.992982 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-knl97\" (UniqueName: \"kubernetes.io/projected/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-kube-api-access-knl97\") pod \"placement-df7d-account-create-update-ttm7k\" (UID: \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\") " pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.993104 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cks9p\" (UniqueName: \"kubernetes.io/projected/3a9977b0-368a-4fd5-997f-760640256681-kube-api-access-cks9p\") pod \"placement-db-create-6fzk4\" (UID: \"3a9977b0-368a-4fd5-997f-760640256681\") " pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.993168 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-operator-scripts\") pod \"placement-df7d-account-create-update-ttm7k\" (UID: \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\") " pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.993210 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a9977b0-368a-4fd5-997f-760640256681-operator-scripts\") pod \"placement-db-create-6fzk4\" (UID: \"3a9977b0-368a-4fd5-997f-760640256681\") " pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.993649 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcb96126-2830-41c5-9c53-777a164e5e29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dcb96126-2830-41c5-9c53-777a164e5e29" (UID: "dcb96126-2830-41c5-9c53-777a164e5e29"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:44 crc kubenswrapper[4996]: I0228 09:19:44.993938 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a9977b0-368a-4fd5-997f-760640256681-operator-scripts\") pod \"placement-db-create-6fzk4\" (UID: \"3a9977b0-368a-4fd5-997f-760640256681\") " pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:44.999946 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb96126-2830-41c5-9c53-777a164e5e29-kube-api-access-j7np7" (OuterVolumeSpecName: "kube-api-access-j7np7") pod "dcb96126-2830-41c5-9c53-777a164e5e29" (UID: "dcb96126-2830-41c5-9c53-777a164e5e29"). InnerVolumeSpecName "kube-api-access-j7np7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.009821 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cks9p\" (UniqueName: \"kubernetes.io/projected/3a9977b0-368a-4fd5-997f-760640256681-kube-api-access-cks9p\") pod \"placement-db-create-6fzk4\" (UID: \"3a9977b0-368a-4fd5-997f-760640256681\") " pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.026942 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.094795 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-operator-scripts\") pod \"placement-df7d-account-create-update-ttm7k\" (UID: \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\") " pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.095112 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knl97\" (UniqueName: \"kubernetes.io/projected/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-kube-api-access-knl97\") pod \"placement-df7d-account-create-update-ttm7k\" (UID: \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\") " pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.095199 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dcb96126-2830-41c5-9c53-777a164e5e29-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.095212 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7np7\" (UniqueName: \"kubernetes.io/projected/dcb96126-2830-41c5-9c53-777a164e5e29-kube-api-access-j7np7\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.095875 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-operator-scripts\") pod \"placement-df7d-account-create-update-ttm7k\" (UID: \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\") " pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.112328 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-knl97\" (UniqueName: \"kubernetes.io/projected/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-kube-api-access-knl97\") pod \"placement-df7d-account-create-update-ttm7k\" (UID: \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\") " pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.178809 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.224702 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.351132 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-lb9rn"] Feb 28 09:19:45 crc kubenswrapper[4996]: W0228 09:19:45.355410 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37fcb493_2d66_4e54_a21a_bb6f84f68479.slice/crio-8d6ccebfef56f741897d9aad800e82b9ea8e36509a89d2587a2e6aca6b0d3573 WatchSource:0}: Error finding container 8d6ccebfef56f741897d9aad800e82b9ea8e36509a89d2587a2e6aca6b0d3573: Status 404 returned error can't find the container with id 8d6ccebfef56f741897d9aad800e82b9ea8e36509a89d2587a2e6aca6b0d3573 Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.458857 4996 generic.go:334] "Generic (PLEG): container finished" podID="de95567a-4315-4687-9b8b-1a94bba6b4c4" containerID="a3e90e4f95ce60d7af4a10c5dfa83e9d064eeb47a1c206fba595bf78b8946f32" exitCode=0 Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.458955 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gtn9c" event={"ID":"de95567a-4315-4687-9b8b-1a94bba6b4c4","Type":"ContainerDied","Data":"a3e90e4f95ce60d7af4a10c5dfa83e9d064eeb47a1c206fba595bf78b8946f32"} Feb 28 09:19:45 crc 
kubenswrapper[4996]: I0228 09:19:45.458988 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gtn9c" event={"ID":"de95567a-4315-4687-9b8b-1a94bba6b4c4","Type":"ContainerStarted","Data":"a517b1b4efd22980692a96433ed5b62f50be9accaf7946045de331d82df1bd2a"} Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.465335 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-5f76l" event={"ID":"dcb96126-2830-41c5-9c53-777a164e5e29","Type":"ContainerDied","Data":"bbd762fd98f1a4def5502fb73716cdac07bfbf9353fe5843a364ad7b92604e6d"} Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.465367 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbd762fd98f1a4def5502fb73716cdac07bfbf9353fe5843a364ad7b92604e6d" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.465421 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-5f76l" Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.481112 4996 generic.go:334] "Generic (PLEG): container finished" podID="edeb8336-9b74-47dd-acb8-22384803c2c6" containerID="418b09438ba9d1f12b752434c9962998e7394ee312b84bc617736280230932da" exitCode=0 Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.481184 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4d7d-account-create-update-rdmzh" event={"ID":"edeb8336-9b74-47dd-acb8-22384803c2c6","Type":"ContainerDied","Data":"418b09438ba9d1f12b752434c9962998e7394ee312b84bc617736280230932da"} Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.481212 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4d7d-account-create-update-rdmzh" event={"ID":"edeb8336-9b74-47dd-acb8-22384803c2c6","Type":"ContainerStarted","Data":"7f1ed6a7127d50acaea3ab79bea6caa0420cafae00fe0d6023b15d73ea1b79e4"} Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.482914 4996 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lb9rn" event={"ID":"37fcb493-2d66-4e54-a21a-bb6f84f68479","Type":"ContainerStarted","Data":"8d6ccebfef56f741897d9aad800e82b9ea8e36509a89d2587a2e6aca6b0d3573"} Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.579715 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-56c8-account-create-update-wkft2"] Feb 28 09:19:45 crc kubenswrapper[4996]: W0228 09:19:45.582559 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5fce9db_bc0e_4778_b0be_d08e1b8febcc.slice/crio-efd31e6e52634cd0d0aa51ac0bbc136116e0ef34a7ab351278b65232a7ec6520 WatchSource:0}: Error finding container efd31e6e52634cd0d0aa51ac0bbc136116e0ef34a7ab351278b65232a7ec6520: Status 404 returned error can't find the container with id efd31e6e52634cd0d0aa51ac0bbc136116e0ef34a7ab351278b65232a7ec6520 Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.719318 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-6fzk4"] Feb 28 09:19:45 crc kubenswrapper[4996]: W0228 09:19:45.719902 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a9977b0_368a_4fd5_997f_760640256681.slice/crio-4b0d9ac54ee48d03a39a5158deccb9336a3e07ca51286bfec43460f9f329dddd WatchSource:0}: Error finding container 4b0d9ac54ee48d03a39a5158deccb9336a3e07ca51286bfec43460f9f329dddd: Status 404 returned error can't find the container with id 4b0d9ac54ee48d03a39a5158deccb9336a3e07ca51286bfec43460f9f329dddd Feb 28 09:19:45 crc kubenswrapper[4996]: I0228 09:19:45.748716 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-df7d-account-create-update-ttm7k"] Feb 28 09:19:45 crc kubenswrapper[4996]: W0228 09:19:45.759061 4996 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5a66f08_b7bc_4b5e_8e08_25d602c30e34.slice/crio-f4fb790bff37bdcf324bfa4dbb630a519838d3a0916605be41f080a5734ad8a5 WatchSource:0}: Error finding container f4fb790bff37bdcf324bfa4dbb630a519838d3a0916605be41f080a5734ad8a5: Status 404 returned error can't find the container with id f4fb790bff37bdcf324bfa4dbb630a519838d3a0916605be41f080a5734ad8a5 Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.492681 4996 generic.go:334] "Generic (PLEG): container finished" podID="c5a66f08-b7bc-4b5e-8e08-25d602c30e34" containerID="f3ae16b9c8cd14c9121bca722a2863f4cc5b4e608cf91a5452e3b8bd2d899ed9" exitCode=0 Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.492795 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-df7d-account-create-update-ttm7k" event={"ID":"c5a66f08-b7bc-4b5e-8e08-25d602c30e34","Type":"ContainerDied","Data":"f3ae16b9c8cd14c9121bca722a2863f4cc5b4e608cf91a5452e3b8bd2d899ed9"} Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.493106 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-df7d-account-create-update-ttm7k" event={"ID":"c5a66f08-b7bc-4b5e-8e08-25d602c30e34","Type":"ContainerStarted","Data":"f4fb790bff37bdcf324bfa4dbb630a519838d3a0916605be41f080a5734ad8a5"} Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.496887 4996 generic.go:334] "Generic (PLEG): container finished" podID="3a9977b0-368a-4fd5-997f-760640256681" containerID="4a321b218126663abcdde2341e1597440e4466f0ebb2710cae193694c7bd5513" exitCode=0 Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.496958 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fzk4" event={"ID":"3a9977b0-368a-4fd5-997f-760640256681","Type":"ContainerDied","Data":"4a321b218126663abcdde2341e1597440e4466f0ebb2710cae193694c7bd5513"} Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.496986 4996 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-db-create-6fzk4" event={"ID":"3a9977b0-368a-4fd5-997f-760640256681","Type":"ContainerStarted","Data":"4b0d9ac54ee48d03a39a5158deccb9336a3e07ca51286bfec43460f9f329dddd"} Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.498702 4996 generic.go:334] "Generic (PLEG): container finished" podID="37fcb493-2d66-4e54-a21a-bb6f84f68479" containerID="a5743272415b1eb8d207e7122aa11baaaa5cadec6a5b1a4915f0f55b78a78996" exitCode=0 Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.498831 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lb9rn" event={"ID":"37fcb493-2d66-4e54-a21a-bb6f84f68479","Type":"ContainerDied","Data":"a5743272415b1eb8d207e7122aa11baaaa5cadec6a5b1a4915f0f55b78a78996"} Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.500077 4996 generic.go:334] "Generic (PLEG): container finished" podID="e5fce9db-bc0e-4778-b0be-d08e1b8febcc" containerID="7edda3a13769aeeeabe7e4a3f8bafec913c6500c465272b973321eea6d9358ff" exitCode=0 Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.500265 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56c8-account-create-update-wkft2" event={"ID":"e5fce9db-bc0e-4778-b0be-d08e1b8febcc","Type":"ContainerDied","Data":"7edda3a13769aeeeabe7e4a3f8bafec913c6500c465272b973321eea6d9358ff"} Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.500290 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56c8-account-create-update-wkft2" event={"ID":"e5fce9db-bc0e-4778-b0be-d08e1b8febcc","Type":"ContainerStarted","Data":"efd31e6e52634cd0d0aa51ac0bbc136116e0ef34a7ab351278b65232a7ec6520"} Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.881343 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.887703 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.921911 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edeb8336-9b74-47dd-acb8-22384803c2c6-operator-scripts\") pod \"edeb8336-9b74-47dd-acb8-22384803c2c6\" (UID: \"edeb8336-9b74-47dd-acb8-22384803c2c6\") " Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.922033 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnckl\" (UniqueName: \"kubernetes.io/projected/edeb8336-9b74-47dd-acb8-22384803c2c6-kube-api-access-wnckl\") pod \"edeb8336-9b74-47dd-acb8-22384803c2c6\" (UID: \"edeb8336-9b74-47dd-acb8-22384803c2c6\") " Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.922065 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmvj2\" (UniqueName: \"kubernetes.io/projected/de95567a-4315-4687-9b8b-1a94bba6b4c4-kube-api-access-kmvj2\") pod \"de95567a-4315-4687-9b8b-1a94bba6b4c4\" (UID: \"de95567a-4315-4687-9b8b-1a94bba6b4c4\") " Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.922089 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de95567a-4315-4687-9b8b-1a94bba6b4c4-operator-scripts\") pod \"de95567a-4315-4687-9b8b-1a94bba6b4c4\" (UID: \"de95567a-4315-4687-9b8b-1a94bba6b4c4\") " Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.922693 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edeb8336-9b74-47dd-acb8-22384803c2c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edeb8336-9b74-47dd-acb8-22384803c2c6" (UID: "edeb8336-9b74-47dd-acb8-22384803c2c6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.922904 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de95567a-4315-4687-9b8b-1a94bba6b4c4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de95567a-4315-4687-9b8b-1a94bba6b4c4" (UID: "de95567a-4315-4687-9b8b-1a94bba6b4c4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.923257 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de95567a-4315-4687-9b8b-1a94bba6b4c4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.923272 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edeb8336-9b74-47dd-acb8-22384803c2c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.928770 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de95567a-4315-4687-9b8b-1a94bba6b4c4-kube-api-access-kmvj2" (OuterVolumeSpecName: "kube-api-access-kmvj2") pod "de95567a-4315-4687-9b8b-1a94bba6b4c4" (UID: "de95567a-4315-4687-9b8b-1a94bba6b4c4"). InnerVolumeSpecName "kube-api-access-kmvj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:46 crc kubenswrapper[4996]: I0228 09:19:46.928979 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edeb8336-9b74-47dd-acb8-22384803c2c6-kube-api-access-wnckl" (OuterVolumeSpecName: "kube-api-access-wnckl") pod "edeb8336-9b74-47dd-acb8-22384803c2c6" (UID: "edeb8336-9b74-47dd-acb8-22384803c2c6"). InnerVolumeSpecName "kube-api-access-wnckl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.025919 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnckl\" (UniqueName: \"kubernetes.io/projected/edeb8336-9b74-47dd-acb8-22384803c2c6-kube-api-access-wnckl\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.025961 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmvj2\" (UniqueName: \"kubernetes.io/projected/de95567a-4315-4687-9b8b-1a94bba6b4c4-kube-api-access-kmvj2\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.513292 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gtn9c" event={"ID":"de95567a-4315-4687-9b8b-1a94bba6b4c4","Type":"ContainerDied","Data":"a517b1b4efd22980692a96433ed5b62f50be9accaf7946045de331d82df1bd2a"} Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.514203 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a517b1b4efd22980692a96433ed5b62f50be9accaf7946045de331d82df1bd2a" Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.513333 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gtn9c" Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.516598 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4d7d-account-create-update-rdmzh" Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.517130 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4d7d-account-create-update-rdmzh" event={"ID":"edeb8336-9b74-47dd-acb8-22384803c2c6","Type":"ContainerDied","Data":"7f1ed6a7127d50acaea3ab79bea6caa0420cafae00fe0d6023b15d73ea1b79e4"} Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.517162 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f1ed6a7127d50acaea3ab79bea6caa0420cafae00fe0d6023b15d73ea1b79e4" Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.887508 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.939252 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knl97\" (UniqueName: \"kubernetes.io/projected/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-kube-api-access-knl97\") pod \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\" (UID: \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\") " Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.939357 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-operator-scripts\") pod \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\" (UID: \"c5a66f08-b7bc-4b5e-8e08-25d602c30e34\") " Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.941374 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5a66f08-b7bc-4b5e-8e08-25d602c30e34" (UID: "c5a66f08-b7bc-4b5e-8e08-25d602c30e34"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:47 crc kubenswrapper[4996]: I0228 09:19:47.949357 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-kube-api-access-knl97" (OuterVolumeSpecName: "kube-api-access-knl97") pod "c5a66f08-b7bc-4b5e-8e08-25d602c30e34" (UID: "c5a66f08-b7bc-4b5e-8e08-25d602c30e34"). InnerVolumeSpecName "kube-api-access-knl97". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.043464 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knl97\" (UniqueName: \"kubernetes.io/projected/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-kube-api-access-knl97\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.043497 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5a66f08-b7bc-4b5e-8e08-25d602c30e34-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.071730 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.078921 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.090134 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.143810 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a9977b0-368a-4fd5-997f-760640256681-operator-scripts\") pod \"3a9977b0-368a-4fd5-997f-760640256681\" (UID: \"3a9977b0-368a-4fd5-997f-760640256681\") " Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.143864 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-operator-scripts\") pod \"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\" (UID: \"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\") " Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.143886 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cks9p\" (UniqueName: \"kubernetes.io/projected/3a9977b0-368a-4fd5-997f-760640256681-kube-api-access-cks9p\") pod \"3a9977b0-368a-4fd5-997f-760640256681\" (UID: \"3a9977b0-368a-4fd5-997f-760640256681\") " Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.143918 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37fcb493-2d66-4e54-a21a-bb6f84f68479-operator-scripts\") pod \"37fcb493-2d66-4e54-a21a-bb6f84f68479\" (UID: \"37fcb493-2d66-4e54-a21a-bb6f84f68479\") " Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.143987 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kzqr\" (UniqueName: \"kubernetes.io/projected/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-kube-api-access-7kzqr\") pod \"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\" (UID: \"e5fce9db-bc0e-4778-b0be-d08e1b8febcc\") " Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.144029 4996 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-79kfl\" (UniqueName: \"kubernetes.io/projected/37fcb493-2d66-4e54-a21a-bb6f84f68479-kube-api-access-79kfl\") pod \"37fcb493-2d66-4e54-a21a-bb6f84f68479\" (UID: \"37fcb493-2d66-4e54-a21a-bb6f84f68479\") " Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.146585 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a9977b0-368a-4fd5-997f-760640256681-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a9977b0-368a-4fd5-997f-760640256681" (UID: "3a9977b0-368a-4fd5-997f-760640256681"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.146604 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5fce9db-bc0e-4778-b0be-d08e1b8febcc" (UID: "e5fce9db-bc0e-4778-b0be-d08e1b8febcc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.146745 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fcb493-2d66-4e54-a21a-bb6f84f68479-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37fcb493-2d66-4e54-a21a-bb6f84f68479" (UID: "37fcb493-2d66-4e54-a21a-bb6f84f68479"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.147571 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9977b0-368a-4fd5-997f-760640256681-kube-api-access-cks9p" (OuterVolumeSpecName: "kube-api-access-cks9p") pod "3a9977b0-368a-4fd5-997f-760640256681" (UID: "3a9977b0-368a-4fd5-997f-760640256681"). 
InnerVolumeSpecName "kube-api-access-cks9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.148270 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fcb493-2d66-4e54-a21a-bb6f84f68479-kube-api-access-79kfl" (OuterVolumeSpecName: "kube-api-access-79kfl") pod "37fcb493-2d66-4e54-a21a-bb6f84f68479" (UID: "37fcb493-2d66-4e54-a21a-bb6f84f68479"). InnerVolumeSpecName "kube-api-access-79kfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.149771 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-kube-api-access-7kzqr" (OuterVolumeSpecName: "kube-api-access-7kzqr") pod "e5fce9db-bc0e-4778-b0be-d08e1b8febcc" (UID: "e5fce9db-bc0e-4778-b0be-d08e1b8febcc"). InnerVolumeSpecName "kube-api-access-7kzqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.245378 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a9977b0-368a-4fd5-997f-760640256681-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.245416 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.245426 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cks9p\" (UniqueName: \"kubernetes.io/projected/3a9977b0-368a-4fd5-997f-760640256681-kube-api-access-cks9p\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.245437 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/37fcb493-2d66-4e54-a21a-bb6f84f68479-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.245447 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kzqr\" (UniqueName: \"kubernetes.io/projected/e5fce9db-bc0e-4778-b0be-d08e1b8febcc-kube-api-access-7kzqr\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.245455 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79kfl\" (UniqueName: \"kubernetes.io/projected/37fcb493-2d66-4e54-a21a-bb6f84f68479-kube-api-access-79kfl\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.527602 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-df7d-account-create-update-ttm7k" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.528519 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-df7d-account-create-update-ttm7k" event={"ID":"c5a66f08-b7bc-4b5e-8e08-25d602c30e34","Type":"ContainerDied","Data":"f4fb790bff37bdcf324bfa4dbb630a519838d3a0916605be41f080a5734ad8a5"} Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.528554 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4fb790bff37bdcf324bfa4dbb630a519838d3a0916605be41f080a5734ad8a5" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.529955 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-6fzk4" event={"ID":"3a9977b0-368a-4fd5-997f-760640256681","Type":"ContainerDied","Data":"4b0d9ac54ee48d03a39a5158deccb9336a3e07ca51286bfec43460f9f329dddd"} Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.529984 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b0d9ac54ee48d03a39a5158deccb9336a3e07ca51286bfec43460f9f329dddd" Feb 28 09:19:48 crc 
kubenswrapper[4996]: I0228 09:19:48.530065 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-6fzk4" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.537030 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-lb9rn" event={"ID":"37fcb493-2d66-4e54-a21a-bb6f84f68479","Type":"ContainerDied","Data":"8d6ccebfef56f741897d9aad800e82b9ea8e36509a89d2587a2e6aca6b0d3573"} Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.537088 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d6ccebfef56f741897d9aad800e82b9ea8e36509a89d2587a2e6aca6b0d3573" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.537098 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-lb9rn" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.538337 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-56c8-account-create-update-wkft2" event={"ID":"e5fce9db-bc0e-4778-b0be-d08e1b8febcc","Type":"ContainerDied","Data":"efd31e6e52634cd0d0aa51ac0bbc136116e0ef34a7ab351278b65232a7ec6520"} Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.538381 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efd31e6e52634cd0d0aa51ac0bbc136116e0ef34a7ab351278b65232a7ec6520" Feb 28 09:19:48 crc kubenswrapper[4996]: I0228 09:19:48.538441 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-56c8-account-create-update-wkft2" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.110050 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dw7wp"] Feb 28 09:19:49 crc kubenswrapper[4996]: E0228 09:19:49.110706 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9977b0-368a-4fd5-997f-760640256681" containerName="mariadb-database-create" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.110723 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9977b0-368a-4fd5-997f-760640256681" containerName="mariadb-database-create" Feb 28 09:19:49 crc kubenswrapper[4996]: E0228 09:19:49.110740 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fce9db-bc0e-4778-b0be-d08e1b8febcc" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.110749 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fce9db-bc0e-4778-b0be-d08e1b8febcc" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: E0228 09:19:49.110759 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fcb493-2d66-4e54-a21a-bb6f84f68479" containerName="mariadb-database-create" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.110766 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fcb493-2d66-4e54-a21a-bb6f84f68479" containerName="mariadb-database-create" Feb 28 09:19:49 crc kubenswrapper[4996]: E0228 09:19:49.110788 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a66f08-b7bc-4b5e-8e08-25d602c30e34" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.110796 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a66f08-b7bc-4b5e-8e08-25d602c30e34" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: E0228 09:19:49.110811 4996 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="dcb96126-2830-41c5-9c53-777a164e5e29" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.110819 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb96126-2830-41c5-9c53-777a164e5e29" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: E0228 09:19:49.110829 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edeb8336-9b74-47dd-acb8-22384803c2c6" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.110837 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="edeb8336-9b74-47dd-acb8-22384803c2c6" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: E0228 09:19:49.110851 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de95567a-4315-4687-9b8b-1a94bba6b4c4" containerName="mariadb-database-create" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.110858 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="de95567a-4315-4687-9b8b-1a94bba6b4c4" containerName="mariadb-database-create" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.111062 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="edeb8336-9b74-47dd-acb8-22384803c2c6" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.111075 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fcb493-2d66-4e54-a21a-bb6f84f68479" containerName="mariadb-database-create" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.111090 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a66f08-b7bc-4b5e-8e08-25d602c30e34" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.111103 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fce9db-bc0e-4778-b0be-d08e1b8febcc" 
containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.111115 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="de95567a-4315-4687-9b8b-1a94bba6b4c4" containerName="mariadb-database-create" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.111123 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb96126-2830-41c5-9c53-777a164e5e29" containerName="mariadb-account-create-update" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.111134 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a9977b0-368a-4fd5-997f-760640256681" containerName="mariadb-database-create" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.111726 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.116111 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.116144 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7whlv" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.140268 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dw7wp"] Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.267034 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-config-data\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.267103 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-db-sync-config-data\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.267161 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jf7g\" (UniqueName: \"kubernetes.io/projected/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-kube-api-access-8jf7g\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.267179 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-combined-ca-bundle\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.369092 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-db-sync-config-data\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.369197 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jf7g\" (UniqueName: \"kubernetes.io/projected/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-kube-api-access-8jf7g\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.369234 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-combined-ca-bundle\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.369307 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-config-data\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.373363 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-db-sync-config-data\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.377178 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-combined-ca-bundle\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.382597 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-config-data\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.384453 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jf7g\" (UniqueName: \"kubernetes.io/projected/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-kube-api-access-8jf7g\") pod \"glance-db-sync-dw7wp\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") 
" pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.429663 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dw7wp" Feb 28 09:19:49 crc kubenswrapper[4996]: I0228 09:19:49.949905 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dw7wp"] Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.071269 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.394496 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-5f76l"] Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.400310 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-5f76l"] Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.482496 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cmc8x"] Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.483722 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cmc8x" Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.485692 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.486808 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmwdv\" (UniqueName: \"kubernetes.io/projected/e40da46e-f40e-412f-a3ec-0218e11cd495-kube-api-access-fmwdv\") pod \"root-account-create-update-cmc8x\" (UID: \"e40da46e-f40e-412f-a3ec-0218e11cd495\") " pod="openstack/root-account-create-update-cmc8x" Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.486882 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40da46e-f40e-412f-a3ec-0218e11cd495-operator-scripts\") pod \"root-account-create-update-cmc8x\" (UID: \"e40da46e-f40e-412f-a3ec-0218e11cd495\") " pod="openstack/root-account-create-update-cmc8x" Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.494048 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cmc8x"] Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.557308 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dw7wp" event={"ID":"ca969dbf-f76c-4c52-b619-0c85dd8a7f61","Type":"ContainerStarted","Data":"232d95e8f2ebaa9b6def123df41a663612b0053df9d40b05fdacd0d961fc6143"} Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.587777 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmwdv\" (UniqueName: \"kubernetes.io/projected/e40da46e-f40e-412f-a3ec-0218e11cd495-kube-api-access-fmwdv\") pod \"root-account-create-update-cmc8x\" (UID: \"e40da46e-f40e-412f-a3ec-0218e11cd495\") " pod="openstack/root-account-create-update-cmc8x" Feb 28 
09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.587840 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40da46e-f40e-412f-a3ec-0218e11cd495-operator-scripts\") pod \"root-account-create-update-cmc8x\" (UID: \"e40da46e-f40e-412f-a3ec-0218e11cd495\") " pod="openstack/root-account-create-update-cmc8x" Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.588590 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40da46e-f40e-412f-a3ec-0218e11cd495-operator-scripts\") pod \"root-account-create-update-cmc8x\" (UID: \"e40da46e-f40e-412f-a3ec-0218e11cd495\") " pod="openstack/root-account-create-update-cmc8x" Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.624626 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmwdv\" (UniqueName: \"kubernetes.io/projected/e40da46e-f40e-412f-a3ec-0218e11cd495-kube-api-access-fmwdv\") pod \"root-account-create-update-cmc8x\" (UID: \"e40da46e-f40e-412f-a3ec-0218e11cd495\") " pod="openstack/root-account-create-update-cmc8x" Feb 28 09:19:50 crc kubenswrapper[4996]: I0228 09:19:50.809417 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cmc8x" Feb 28 09:19:51 crc kubenswrapper[4996]: I0228 09:19:51.041626 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb96126-2830-41c5-9c53-777a164e5e29" path="/var/lib/kubelet/pods/dcb96126-2830-41c5-9c53-777a164e5e29/volumes" Feb 28 09:19:51 crc kubenswrapper[4996]: W0228 09:19:51.268424 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode40da46e_f40e_412f_a3ec_0218e11cd495.slice/crio-9b38a14cf860740979d1f8cd18ffcc764aebd4c2f91e79a2b60a0eebdea2ae39 WatchSource:0}: Error finding container 9b38a14cf860740979d1f8cd18ffcc764aebd4c2f91e79a2b60a0eebdea2ae39: Status 404 returned error can't find the container with id 9b38a14cf860740979d1f8cd18ffcc764aebd4c2f91e79a2b60a0eebdea2ae39 Feb 28 09:19:51 crc kubenswrapper[4996]: I0228 09:19:51.269324 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cmc8x"] Feb 28 09:19:51 crc kubenswrapper[4996]: I0228 09:19:51.565817 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cmc8x" event={"ID":"e40da46e-f40e-412f-a3ec-0218e11cd495","Type":"ContainerStarted","Data":"51a8bec5d86469aeacdb0fa3b83cbf3e46596863470b0d72d93ae684933eea54"} Feb 28 09:19:51 crc kubenswrapper[4996]: I0228 09:19:51.565858 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cmc8x" event={"ID":"e40da46e-f40e-412f-a3ec-0218e11cd495","Type":"ContainerStarted","Data":"9b38a14cf860740979d1f8cd18ffcc764aebd4c2f91e79a2b60a0eebdea2ae39"} Feb 28 09:19:51 crc kubenswrapper[4996]: I0228 09:19:51.582160 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cmc8x" podStartSLOduration=1.5821428229999999 podStartE2EDuration="1.582142823s" podCreationTimestamp="2026-02-28 09:19:50 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:19:51.576274092 +0000 UTC m=+1155.267076903" watchObservedRunningTime="2026-02-28 09:19:51.582142823 +0000 UTC m=+1155.272945624" Feb 28 09:19:52 crc kubenswrapper[4996]: I0228 09:19:52.576536 4996 generic.go:334] "Generic (PLEG): container finished" podID="e40da46e-f40e-412f-a3ec-0218e11cd495" containerID="51a8bec5d86469aeacdb0fa3b83cbf3e46596863470b0d72d93ae684933eea54" exitCode=0 Feb 28 09:19:52 crc kubenswrapper[4996]: I0228 09:19:52.576599 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cmc8x" event={"ID":"e40da46e-f40e-412f-a3ec-0218e11cd495","Type":"ContainerDied","Data":"51a8bec5d86469aeacdb0fa3b83cbf3e46596863470b0d72d93ae684933eea54"} Feb 28 09:19:53 crc kubenswrapper[4996]: I0228 09:19:53.923341 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cmc8x" Feb 28 09:19:54 crc kubenswrapper[4996]: I0228 09:19:54.048394 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmwdv\" (UniqueName: \"kubernetes.io/projected/e40da46e-f40e-412f-a3ec-0218e11cd495-kube-api-access-fmwdv\") pod \"e40da46e-f40e-412f-a3ec-0218e11cd495\" (UID: \"e40da46e-f40e-412f-a3ec-0218e11cd495\") " Feb 28 09:19:54 crc kubenswrapper[4996]: I0228 09:19:54.048543 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40da46e-f40e-412f-a3ec-0218e11cd495-operator-scripts\") pod \"e40da46e-f40e-412f-a3ec-0218e11cd495\" (UID: \"e40da46e-f40e-412f-a3ec-0218e11cd495\") " Feb 28 09:19:54 crc kubenswrapper[4996]: I0228 09:19:54.049464 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e40da46e-f40e-412f-a3ec-0218e11cd495-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "e40da46e-f40e-412f-a3ec-0218e11cd495" (UID: "e40da46e-f40e-412f-a3ec-0218e11cd495"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:19:54 crc kubenswrapper[4996]: I0228 09:19:54.056236 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40da46e-f40e-412f-a3ec-0218e11cd495-kube-api-access-fmwdv" (OuterVolumeSpecName: "kube-api-access-fmwdv") pod "e40da46e-f40e-412f-a3ec-0218e11cd495" (UID: "e40da46e-f40e-412f-a3ec-0218e11cd495"). InnerVolumeSpecName "kube-api-access-fmwdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:19:54 crc kubenswrapper[4996]: I0228 09:19:54.150232 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmwdv\" (UniqueName: \"kubernetes.io/projected/e40da46e-f40e-412f-a3ec-0218e11cd495-kube-api-access-fmwdv\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:54 crc kubenswrapper[4996]: I0228 09:19:54.150261 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e40da46e-f40e-412f-a3ec-0218e11cd495-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:19:54 crc kubenswrapper[4996]: I0228 09:19:54.593088 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cmc8x" event={"ID":"e40da46e-f40e-412f-a3ec-0218e11cd495","Type":"ContainerDied","Data":"9b38a14cf860740979d1f8cd18ffcc764aebd4c2f91e79a2b60a0eebdea2ae39"} Feb 28 09:19:54 crc kubenswrapper[4996]: I0228 09:19:54.593456 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b38a14cf860740979d1f8cd18ffcc764aebd4c2f91e79a2b60a0eebdea2ae39" Feb 28 09:19:54 crc kubenswrapper[4996]: I0228 09:19:54.593143 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-cmc8x" Feb 28 09:19:57 crc kubenswrapper[4996]: I0228 09:19:57.615412 4996 generic.go:334] "Generic (PLEG): container finished" podID="7dfcffc8-039f-459c-9f97-d8d595506234" containerID="3ed538687163ff88d97f08dd4b725bdd040099b40e9ac9d1bdbc3e2e3d8f19f4" exitCode=0 Feb 28 09:19:57 crc kubenswrapper[4996]: I0228 09:19:57.615749 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7dfcffc8-039f-459c-9f97-d8d595506234","Type":"ContainerDied","Data":"3ed538687163ff88d97f08dd4b725bdd040099b40e9ac9d1bdbc3e2e3d8f19f4"} Feb 28 09:19:57 crc kubenswrapper[4996]: I0228 09:19:57.619213 4996 generic.go:334] "Generic (PLEG): container finished" podID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerID="a25211526baa7403188021f3fc538e87de9e8c663c7b688919dd5d11965d108c" exitCode=0 Feb 28 09:19:57 crc kubenswrapper[4996]: I0228 09:19:57.619257 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d394b420-eb09-49f3-a92c-32cbed3f63eb","Type":"ContainerDied","Data":"a25211526baa7403188021f3fc538e87de9e8c663c7b688919dd5d11965d108c"} Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.005272 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-6rm4w" podUID="ab34e1ca-2f20-4604-85fa-ca92e0a1ce68" containerName="ovn-controller" probeResult="failure" output=< Feb 28 09:19:59 crc kubenswrapper[4996]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 28 09:19:59 crc kubenswrapper[4996]: > Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.027059 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7lm47" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.050415 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-7lm47" Feb 28 
09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.242475 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6rm4w-config-8sh4x"] Feb 28 09:19:59 crc kubenswrapper[4996]: E0228 09:19:59.242894 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40da46e-f40e-412f-a3ec-0218e11cd495" containerName="mariadb-account-create-update" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.242916 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40da46e-f40e-412f-a3ec-0218e11cd495" containerName="mariadb-account-create-update" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.243145 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40da46e-f40e-412f-a3ec-0218e11cd495" containerName="mariadb-account-create-update" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.243712 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.245593 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.265155 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6rm4w-config-8sh4x"] Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.437038 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-additional-scripts\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.437082 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.437102 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-log-ovn\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.437160 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run-ovn\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.437244 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbzpj\" (UniqueName: \"kubernetes.io/projected/91f15509-f380-4504-93c5-8dce515f548b-kube-api-access-nbzpj\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.437276 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-scripts\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.539213 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run-ovn\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.539289 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbzpj\" (UniqueName: \"kubernetes.io/projected/91f15509-f380-4504-93c5-8dce515f548b-kube-api-access-nbzpj\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.539324 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-scripts\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.539381 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-additional-scripts\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.539397 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.539411 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-log-ovn\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.539543 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run-ovn\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.539602 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-log-ovn\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.539654 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.540298 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-additional-scripts\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.541480 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-scripts\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.558452 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbzpj\" (UniqueName: \"kubernetes.io/projected/91f15509-f380-4504-93c5-8dce515f548b-kube-api-access-nbzpj\") pod \"ovn-controller-6rm4w-config-8sh4x\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:19:59 crc kubenswrapper[4996]: I0228 09:19:59.566053 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.126814 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537840-zfw87"] Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.129097 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537840-zfw87" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.131105 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.132049 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.133474 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.135761 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537840-zfw87"] Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.149276 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6pvs\" (UniqueName: \"kubernetes.io/projected/555bf038-cf05-4545-9254-1ef90fad3514-kube-api-access-m6pvs\") pod \"auto-csr-approver-29537840-zfw87\" (UID: \"555bf038-cf05-4545-9254-1ef90fad3514\") " pod="openshift-infra/auto-csr-approver-29537840-zfw87" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.250236 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6pvs\" (UniqueName: \"kubernetes.io/projected/555bf038-cf05-4545-9254-1ef90fad3514-kube-api-access-m6pvs\") pod \"auto-csr-approver-29537840-zfw87\" (UID: \"555bf038-cf05-4545-9254-1ef90fad3514\") " pod="openshift-infra/auto-csr-approver-29537840-zfw87" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.270672 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6pvs\" (UniqueName: \"kubernetes.io/projected/555bf038-cf05-4545-9254-1ef90fad3514-kube-api-access-m6pvs\") pod \"auto-csr-approver-29537840-zfw87\" (UID: \"555bf038-cf05-4545-9254-1ef90fad3514\") " 
pod="openshift-infra/auto-csr-approver-29537840-zfw87" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.452613 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537840-zfw87" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.670510 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7dfcffc8-039f-459c-9f97-d8d595506234","Type":"ContainerStarted","Data":"2513f5615f3168be7844d6c7b8e824184736476692ac5f97a547ef674f00c3eb"} Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.670838 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.673859 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d394b420-eb09-49f3-a92c-32cbed3f63eb","Type":"ContainerStarted","Data":"926c9ffc7d896509d930bbdde07970546a6e4f3e11ff35c17c6870942134471d"} Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.674377 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.682961 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6rm4w-config-8sh4x"] Feb 28 09:20:00 crc kubenswrapper[4996]: W0228 09:20:00.683034 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91f15509_f380_4504_93c5_8dce515f548b.slice/crio-3279b2b42c1b18301cbc8471636c9d12112a0186c3d55a74b61ad460696fc02d WatchSource:0}: Error finding container 3279b2b42c1b18301cbc8471636c9d12112a0186c3d55a74b61ad460696fc02d: Status 404 returned error can't find the container with id 3279b2b42c1b18301cbc8471636c9d12112a0186c3d55a74b61ad460696fc02d Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.703765 4996 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.204546206 podStartE2EDuration="1m1.703750629s" podCreationTimestamp="2026-02-28 09:18:59 +0000 UTC" firstStartedPulling="2026-02-28 09:19:11.233844195 +0000 UTC m=+1114.924647006" lastFinishedPulling="2026-02-28 09:19:22.733048618 +0000 UTC m=+1126.423851429" observedRunningTime="2026-02-28 09:20:00.696845352 +0000 UTC m=+1164.387648173" watchObservedRunningTime="2026-02-28 09:20:00.703750629 +0000 UTC m=+1164.394553440" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.730838 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.909633997 podStartE2EDuration="1m2.730820361s" podCreationTimestamp="2026-02-28 09:18:58 +0000 UTC" firstStartedPulling="2026-02-28 09:19:11.752174452 +0000 UTC m=+1115.442977263" lastFinishedPulling="2026-02-28 09:19:22.573360816 +0000 UTC m=+1126.264163627" observedRunningTime="2026-02-28 09:20:00.727922001 +0000 UTC m=+1164.418724812" watchObservedRunningTime="2026-02-28 09:20:00.730820361 +0000 UTC m=+1164.421623172" Feb 28 09:20:00 crc kubenswrapper[4996]: I0228 09:20:00.951935 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537840-zfw87"] Feb 28 09:20:01 crc kubenswrapper[4996]: I0228 09:20:01.682656 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dw7wp" event={"ID":"ca969dbf-f76c-4c52-b619-0c85dd8a7f61","Type":"ContainerStarted","Data":"ed87c70d05e7fadf9caffe1331dbdf94e2804fde193dd414a3ff8797955223ac"} Feb 28 09:20:01 crc kubenswrapper[4996]: I0228 09:20:01.684040 4996 generic.go:334] "Generic (PLEG): container finished" podID="91f15509-f380-4504-93c5-8dce515f548b" containerID="1d52320fe6d11c60755fa9079e52f16d6e24e81203ebb939756a2eebe4ab4153" exitCode=0 Feb 28 09:20:01 crc kubenswrapper[4996]: I0228 09:20:01.684143 4996 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6rm4w-config-8sh4x" event={"ID":"91f15509-f380-4504-93c5-8dce515f548b","Type":"ContainerDied","Data":"1d52320fe6d11c60755fa9079e52f16d6e24e81203ebb939756a2eebe4ab4153"} Feb 28 09:20:01 crc kubenswrapper[4996]: I0228 09:20:01.684193 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6rm4w-config-8sh4x" event={"ID":"91f15509-f380-4504-93c5-8dce515f548b","Type":"ContainerStarted","Data":"3279b2b42c1b18301cbc8471636c9d12112a0186c3d55a74b61ad460696fc02d"} Feb 28 09:20:01 crc kubenswrapper[4996]: I0228 09:20:01.686445 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537840-zfw87" event={"ID":"555bf038-cf05-4545-9254-1ef90fad3514","Type":"ContainerStarted","Data":"e6f5e51b9d62c386704129a2d0a7fd8daed89aa6379f1d55c02f0155219a65eb"} Feb 28 09:20:01 crc kubenswrapper[4996]: I0228 09:20:01.699654 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dw7wp" podStartSLOduration=2.34104007 podStartE2EDuration="12.699634531s" podCreationTimestamp="2026-02-28 09:19:49 +0000 UTC" firstStartedPulling="2026-02-28 09:19:49.955470401 +0000 UTC m=+1153.646273212" lastFinishedPulling="2026-02-28 09:20:00.314064862 +0000 UTC m=+1164.004867673" observedRunningTime="2026-02-28 09:20:01.698125444 +0000 UTC m=+1165.388928265" watchObservedRunningTime="2026-02-28 09:20:01.699634531 +0000 UTC m=+1165.390437342" Feb 28 09:20:02 crc kubenswrapper[4996]: I0228 09:20:02.700811 4996 generic.go:334] "Generic (PLEG): container finished" podID="555bf038-cf05-4545-9254-1ef90fad3514" containerID="4a2a1612129b304cbd3aa21f2aa1f131e06f3684f22a0e6d05b6f46704e293aa" exitCode=0 Feb 28 09:20:02 crc kubenswrapper[4996]: I0228 09:20:02.701305 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537840-zfw87" 
event={"ID":"555bf038-cf05-4545-9254-1ef90fad3514","Type":"ContainerDied","Data":"4a2a1612129b304cbd3aa21f2aa1f131e06f3684f22a0e6d05b6f46704e293aa"} Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.045183 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.197441 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run-ovn\") pod \"91f15509-f380-4504-93c5-8dce515f548b\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.197796 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run\") pod \"91f15509-f380-4504-93c5-8dce515f548b\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.197550 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "91f15509-f380-4504-93c5-8dce515f548b" (UID: "91f15509-f380-4504-93c5-8dce515f548b"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.197845 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-log-ovn\") pod \"91f15509-f380-4504-93c5-8dce515f548b\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.197863 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run" (OuterVolumeSpecName: "var-run") pod "91f15509-f380-4504-93c5-8dce515f548b" (UID: "91f15509-f380-4504-93c5-8dce515f548b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.197955 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "91f15509-f380-4504-93c5-8dce515f548b" (UID: "91f15509-f380-4504-93c5-8dce515f548b"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.197991 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbzpj\" (UniqueName: \"kubernetes.io/projected/91f15509-f380-4504-93c5-8dce515f548b-kube-api-access-nbzpj\") pod \"91f15509-f380-4504-93c5-8dce515f548b\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.198047 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-scripts\") pod \"91f15509-f380-4504-93c5-8dce515f548b\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.198081 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-additional-scripts\") pod \"91f15509-f380-4504-93c5-8dce515f548b\" (UID: \"91f15509-f380-4504-93c5-8dce515f548b\") " Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.198437 4996 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.198449 4996 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-run\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.198457 4996 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/91f15509-f380-4504-93c5-8dce515f548b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.198774 4996 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "91f15509-f380-4504-93c5-8dce515f548b" (UID: "91f15509-f380-4504-93c5-8dce515f548b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.199624 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-scripts" (OuterVolumeSpecName: "scripts") pod "91f15509-f380-4504-93c5-8dce515f548b" (UID: "91f15509-f380-4504-93c5-8dce515f548b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.218870 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91f15509-f380-4504-93c5-8dce515f548b-kube-api-access-nbzpj" (OuterVolumeSpecName: "kube-api-access-nbzpj") pod "91f15509-f380-4504-93c5-8dce515f548b" (UID: "91f15509-f380-4504-93c5-8dce515f548b"). InnerVolumeSpecName "kube-api-access-nbzpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.300374 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbzpj\" (UniqueName: \"kubernetes.io/projected/91f15509-f380-4504-93c5-8dce515f548b-kube-api-access-nbzpj\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.300429 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.300458 4996 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/91f15509-f380-4504-93c5-8dce515f548b-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.717244 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6rm4w-config-8sh4x" event={"ID":"91f15509-f380-4504-93c5-8dce515f548b","Type":"ContainerDied","Data":"3279b2b42c1b18301cbc8471636c9d12112a0186c3d55a74b61ad460696fc02d"} Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.717311 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6rm4w-config-8sh4x" Feb 28 09:20:03 crc kubenswrapper[4996]: I0228 09:20:03.717326 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3279b2b42c1b18301cbc8471636c9d12112a0186c3d55a74b61ad460696fc02d" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.008122 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-6rm4w" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.013271 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537840-zfw87" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.110575 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6pvs\" (UniqueName: \"kubernetes.io/projected/555bf038-cf05-4545-9254-1ef90fad3514-kube-api-access-m6pvs\") pod \"555bf038-cf05-4545-9254-1ef90fad3514\" (UID: \"555bf038-cf05-4545-9254-1ef90fad3514\") " Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.115512 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/555bf038-cf05-4545-9254-1ef90fad3514-kube-api-access-m6pvs" (OuterVolumeSpecName: "kube-api-access-m6pvs") pod "555bf038-cf05-4545-9254-1ef90fad3514" (UID: "555bf038-cf05-4545-9254-1ef90fad3514"). InnerVolumeSpecName "kube-api-access-m6pvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.160461 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6rm4w-config-8sh4x"] Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.167383 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6rm4w-config-8sh4x"] Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.212366 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6pvs\" (UniqueName: \"kubernetes.io/projected/555bf038-cf05-4545-9254-1ef90fad3514-kube-api-access-m6pvs\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.248053 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6rm4w-config-lnkrg"] Feb 28 09:20:04 crc kubenswrapper[4996]: E0228 09:20:04.248470 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91f15509-f380-4504-93c5-8dce515f548b" containerName="ovn-config" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.248495 4996 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="91f15509-f380-4504-93c5-8dce515f548b" containerName="ovn-config" Feb 28 09:20:04 crc kubenswrapper[4996]: E0228 09:20:04.248527 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="555bf038-cf05-4545-9254-1ef90fad3514" containerName="oc" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.248537 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="555bf038-cf05-4545-9254-1ef90fad3514" containerName="oc" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.248709 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="555bf038-cf05-4545-9254-1ef90fad3514" containerName="oc" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.248738 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="91f15509-f380-4504-93c5-8dce515f548b" containerName="ovn-config" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.249397 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.251430 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.259901 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6rm4w-config-lnkrg"] Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.314219 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-scripts\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.314273 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-log-ovn\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.314328 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sppwk\" (UniqueName: \"kubernetes.io/projected/135ddd21-e5d6-465b-aa36-3fff9de095ad-kube-api-access-sppwk\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.314442 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.314466 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-additional-scripts\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.314505 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run-ovn\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.416549 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-scripts\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.416676 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-log-ovn\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.416795 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sppwk\" (UniqueName: \"kubernetes.io/projected/135ddd21-e5d6-465b-aa36-3fff9de095ad-kube-api-access-sppwk\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.416966 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.417040 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-log-ovn\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.417073 4996 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-additional-scripts\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.417163 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.417192 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run-ovn\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.417279 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run-ovn\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.417801 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-additional-scripts\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.418593 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-scripts\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.446433 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sppwk\" (UniqueName: \"kubernetes.io/projected/135ddd21-e5d6-465b-aa36-3fff9de095ad-kube-api-access-sppwk\") pod \"ovn-controller-6rm4w-config-lnkrg\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.567214 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.736043 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537840-zfw87" event={"ID":"555bf038-cf05-4545-9254-1ef90fad3514","Type":"ContainerDied","Data":"e6f5e51b9d62c386704129a2d0a7fd8daed89aa6379f1d55c02f0155219a65eb"} Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.736083 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6f5e51b9d62c386704129a2d0a7fd8daed89aa6379f1d55c02f0155219a65eb" Feb 28 09:20:04 crc kubenswrapper[4996]: I0228 09:20:04.736133 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537840-zfw87" Feb 28 09:20:05 crc kubenswrapper[4996]: I0228 09:20:05.052887 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91f15509-f380-4504-93c5-8dce515f548b" path="/var/lib/kubelet/pods/91f15509-f380-4504-93c5-8dce515f548b/volumes" Feb 28 09:20:05 crc kubenswrapper[4996]: I0228 09:20:05.054240 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6rm4w-config-lnkrg"] Feb 28 09:20:05 crc kubenswrapper[4996]: I0228 09:20:05.093960 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537834-lt7m5"] Feb 28 09:20:05 crc kubenswrapper[4996]: I0228 09:20:05.100444 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537834-lt7m5"] Feb 28 09:20:05 crc kubenswrapper[4996]: I0228 09:20:05.742740 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6rm4w-config-lnkrg" event={"ID":"135ddd21-e5d6-465b-aa36-3fff9de095ad","Type":"ContainerStarted","Data":"b4ce85c1414d6913330f1610a7dab62169325f26bf8a8ddf181076f51f1d8a61"} Feb 28 09:20:05 crc kubenswrapper[4996]: I0228 09:20:05.742796 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6rm4w-config-lnkrg" event={"ID":"135ddd21-e5d6-465b-aa36-3fff9de095ad","Type":"ContainerStarted","Data":"70f043650b8feaff52d3173e2c4d1d69a0452b0ecb19434a33d78f303054f9cd"} Feb 28 09:20:06 crc kubenswrapper[4996]: I0228 09:20:06.783869 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6rm4w-config-lnkrg" podStartSLOduration=2.783849749 podStartE2EDuration="2.783849749s" podCreationTimestamp="2026-02-28 09:20:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:20:06.776873711 +0000 UTC m=+1170.467676512" 
watchObservedRunningTime="2026-02-28 09:20:06.783849749 +0000 UTC m=+1170.474652560" Feb 28 09:20:07 crc kubenswrapper[4996]: I0228 09:20:07.044938 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d695de3-cc18-423c-bcda-2370449f8479" path="/var/lib/kubelet/pods/3d695de3-cc18-423c-bcda-2370449f8479/volumes" Feb 28 09:20:07 crc kubenswrapper[4996]: I0228 09:20:07.762066 4996 generic.go:334] "Generic (PLEG): container finished" podID="135ddd21-e5d6-465b-aa36-3fff9de095ad" containerID="b4ce85c1414d6913330f1610a7dab62169325f26bf8a8ddf181076f51f1d8a61" exitCode=0 Feb 28 09:20:07 crc kubenswrapper[4996]: I0228 09:20:07.762281 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6rm4w-config-lnkrg" event={"ID":"135ddd21-e5d6-465b-aa36-3fff9de095ad","Type":"ContainerDied","Data":"b4ce85c1414d6913330f1610a7dab62169325f26bf8a8ddf181076f51f1d8a61"} Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.065645 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.200316 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run\") pod \"135ddd21-e5d6-465b-aa36-3fff9de095ad\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.200416 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-additional-scripts\") pod \"135ddd21-e5d6-465b-aa36-3fff9de095ad\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.200493 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-log-ovn\") pod \"135ddd21-e5d6-465b-aa36-3fff9de095ad\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.200512 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run-ovn\") pod \"135ddd21-e5d6-465b-aa36-3fff9de095ad\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.200531 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sppwk\" (UniqueName: \"kubernetes.io/projected/135ddd21-e5d6-465b-aa36-3fff9de095ad-kube-api-access-sppwk\") pod \"135ddd21-e5d6-465b-aa36-3fff9de095ad\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.200579 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-scripts\") pod \"135ddd21-e5d6-465b-aa36-3fff9de095ad\" (UID: \"135ddd21-e5d6-465b-aa36-3fff9de095ad\") " Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.201468 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "135ddd21-e5d6-465b-aa36-3fff9de095ad" (UID: "135ddd21-e5d6-465b-aa36-3fff9de095ad"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.201538 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run" (OuterVolumeSpecName: "var-run") pod "135ddd21-e5d6-465b-aa36-3fff9de095ad" (UID: "135ddd21-e5d6-465b-aa36-3fff9de095ad"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.201812 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-scripts" (OuterVolumeSpecName: "scripts") pod "135ddd21-e5d6-465b-aa36-3fff9de095ad" (UID: "135ddd21-e5d6-465b-aa36-3fff9de095ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.202124 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "135ddd21-e5d6-465b-aa36-3fff9de095ad" (UID: "135ddd21-e5d6-465b-aa36-3fff9de095ad"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.202846 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "135ddd21-e5d6-465b-aa36-3fff9de095ad" (UID: "135ddd21-e5d6-465b-aa36-3fff9de095ad"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.206907 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/135ddd21-e5d6-465b-aa36-3fff9de095ad-kube-api-access-sppwk" (OuterVolumeSpecName: "kube-api-access-sppwk") pod "135ddd21-e5d6-465b-aa36-3fff9de095ad" (UID: "135ddd21-e5d6-465b-aa36-3fff9de095ad"). InnerVolumeSpecName "kube-api-access-sppwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.302436 4996 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.302514 4996 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.302542 4996 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.302570 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sppwk\" (UniqueName: 
\"kubernetes.io/projected/135ddd21-e5d6-465b-aa36-3fff9de095ad-kube-api-access-sppwk\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.302600 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/135ddd21-e5d6-465b-aa36-3fff9de095ad-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.302627 4996 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/135ddd21-e5d6-465b-aa36-3fff9de095ad-var-run\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.782971 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6rm4w-config-lnkrg" event={"ID":"135ddd21-e5d6-465b-aa36-3fff9de095ad","Type":"ContainerDied","Data":"70f043650b8feaff52d3173e2c4d1d69a0452b0ecb19434a33d78f303054f9cd"} Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.783442 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70f043650b8feaff52d3173e2c4d1d69a0452b0ecb19434a33d78f303054f9cd" Feb 28 09:20:09 crc kubenswrapper[4996]: I0228 09:20:09.783053 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6rm4w-config-lnkrg" Feb 28 09:20:10 crc kubenswrapper[4996]: I0228 09:20:10.148318 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6rm4w-config-lnkrg"] Feb 28 09:20:10 crc kubenswrapper[4996]: I0228 09:20:10.155302 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6rm4w-config-lnkrg"] Feb 28 09:20:10 crc kubenswrapper[4996]: I0228 09:20:10.360016 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Feb 28 09:20:10 crc kubenswrapper[4996]: I0228 09:20:10.587399 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7dfcffc8-039f-459c-9f97-d8d595506234" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Feb 28 09:20:11 crc kubenswrapper[4996]: I0228 09:20:11.048920 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="135ddd21-e5d6-465b-aa36-3fff9de095ad" path="/var/lib/kubelet/pods/135ddd21-e5d6-465b-aa36-3fff9de095ad/volumes" Feb 28 09:20:11 crc kubenswrapper[4996]: I0228 09:20:11.800847 4996 generic.go:334] "Generic (PLEG): container finished" podID="ca969dbf-f76c-4c52-b619-0c85dd8a7f61" containerID="ed87c70d05e7fadf9caffe1331dbdf94e2804fde193dd414a3ff8797955223ac" exitCode=0 Feb 28 09:20:11 crc kubenswrapper[4996]: I0228 09:20:11.800965 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dw7wp" event={"ID":"ca969dbf-f76c-4c52-b619-0c85dd8a7f61","Type":"ContainerDied","Data":"ed87c70d05e7fadf9caffe1331dbdf94e2804fde193dd414a3ff8797955223ac"} Feb 28 09:20:12 crc kubenswrapper[4996]: I0228 09:20:12.249038 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:20:12 crc kubenswrapper[4996]: I0228 09:20:12.249134 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.226563 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dw7wp" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.266131 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-combined-ca-bundle\") pod \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.266337 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jf7g\" (UniqueName: \"kubernetes.io/projected/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-kube-api-access-8jf7g\") pod \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.266407 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-config-data\") pod \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.266473 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-db-sync-config-data\") pod \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\" (UID: \"ca969dbf-f76c-4c52-b619-0c85dd8a7f61\") " Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.274441 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-kube-api-access-8jf7g" (OuterVolumeSpecName: "kube-api-access-8jf7g") pod "ca969dbf-f76c-4c52-b619-0c85dd8a7f61" (UID: "ca969dbf-f76c-4c52-b619-0c85dd8a7f61"). InnerVolumeSpecName "kube-api-access-8jf7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.277518 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ca969dbf-f76c-4c52-b619-0c85dd8a7f61" (UID: "ca969dbf-f76c-4c52-b619-0c85dd8a7f61"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.299805 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca969dbf-f76c-4c52-b619-0c85dd8a7f61" (UID: "ca969dbf-f76c-4c52-b619-0c85dd8a7f61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.318751 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-config-data" (OuterVolumeSpecName: "config-data") pod "ca969dbf-f76c-4c52-b619-0c85dd8a7f61" (UID: "ca969dbf-f76c-4c52-b619-0c85dd8a7f61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.368427 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jf7g\" (UniqueName: \"kubernetes.io/projected/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-kube-api-access-8jf7g\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.368461 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.368473 4996 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.368483 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca969dbf-f76c-4c52-b619-0c85dd8a7f61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.822664 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dw7wp" event={"ID":"ca969dbf-f76c-4c52-b619-0c85dd8a7f61","Type":"ContainerDied","Data":"232d95e8f2ebaa9b6def123df41a663612b0053df9d40b05fdacd0d961fc6143"} Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.823093 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="232d95e8f2ebaa9b6def123df41a663612b0053df9d40b05fdacd0d961fc6143" Feb 28 09:20:13 crc kubenswrapper[4996]: I0228 09:20:13.822712 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-dw7wp" Feb 28 09:20:13 crc kubenswrapper[4996]: E0228 09:20:13.994331 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca969dbf_f76c_4c52_b619_0c85dd8a7f61.slice\": RecentStats: unable to find data in memory cache]" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.201283 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4cm87"] Feb 28 09:20:14 crc kubenswrapper[4996]: E0228 09:20:14.201607 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca969dbf-f76c-4c52-b619-0c85dd8a7f61" containerName="glance-db-sync" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.201623 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca969dbf-f76c-4c52-b619-0c85dd8a7f61" containerName="glance-db-sync" Feb 28 09:20:14 crc kubenswrapper[4996]: E0228 09:20:14.201635 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="135ddd21-e5d6-465b-aa36-3fff9de095ad" containerName="ovn-config" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.201642 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="135ddd21-e5d6-465b-aa36-3fff9de095ad" containerName="ovn-config" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.201794 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="135ddd21-e5d6-465b-aa36-3fff9de095ad" containerName="ovn-config" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.201810 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca969dbf-f76c-4c52-b619-0c85dd8a7f61" containerName="glance-db-sync" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.202542 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.215356 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4cm87"] Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.289883 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.289951 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7q8d\" (UniqueName: \"kubernetes.io/projected/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-kube-api-access-v7q8d\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.290045 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-config\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.290065 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.290084 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.391038 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.391087 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.391117 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.391161 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7q8d\" (UniqueName: \"kubernetes.io/projected/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-kube-api-access-v7q8d\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.391236 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-config\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.392129 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-config\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.392148 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.392125 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.392452 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.409277 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7q8d\" (UniqueName: \"kubernetes.io/projected/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-kube-api-access-v7q8d\") pod \"dnsmasq-dns-54f9b7b8d9-4cm87\" 
(UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:14 crc kubenswrapper[4996]: I0228 09:20:14.532612 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:15 crc kubenswrapper[4996]: I0228 09:20:15.047994 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4cm87"] Feb 28 09:20:15 crc kubenswrapper[4996]: I0228 09:20:15.838509 4996 generic.go:334] "Generic (PLEG): container finished" podID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerID="7666b7fb4d485808a56ff4b76748465078c410d162e2121e6a0a3ecb40142409" exitCode=0 Feb 28 09:20:15 crc kubenswrapper[4996]: I0228 09:20:15.838568 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" event={"ID":"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1","Type":"ContainerDied","Data":"7666b7fb4d485808a56ff4b76748465078c410d162e2121e6a0a3ecb40142409"} Feb 28 09:20:15 crc kubenswrapper[4996]: I0228 09:20:15.838729 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" event={"ID":"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1","Type":"ContainerStarted","Data":"c5ee502e15374de39bb4e42c4b320f12ce48b2b5fa740350cc99cd9c7426e423"} Feb 28 09:20:16 crc kubenswrapper[4996]: I0228 09:20:16.848483 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" event={"ID":"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1","Type":"ContainerStarted","Data":"f2b6b958a5535cc232df69e838ca026c8411fe969cc4f19388d1d7fd7f208fea"} Feb 28 09:20:16 crc kubenswrapper[4996]: I0228 09:20:16.848751 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:16 crc kubenswrapper[4996]: I0228 09:20:16.869861 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" 
podStartSLOduration=2.869843161 podStartE2EDuration="2.869843161s" podCreationTimestamp="2026-02-28 09:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:20:16.864787358 +0000 UTC m=+1180.555590179" watchObservedRunningTime="2026-02-28 09:20:16.869843161 +0000 UTC m=+1180.560645972" Feb 28 09:20:20 crc kubenswrapper[4996]: I0228 09:20:20.360347 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:20:20 crc kubenswrapper[4996]: I0228 09:20:20.586711 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.247688 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-q6mpl"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.248936 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.273770 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-q6mpl"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.316253 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhs6c\" (UniqueName: \"kubernetes.io/projected/7daa6505-5dd3-48bd-bba3-18c707ea38ed-kube-api-access-rhs6c\") pod \"cinder-db-create-q6mpl\" (UID: \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\") " pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.316324 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7daa6505-5dd3-48bd-bba3-18c707ea38ed-operator-scripts\") pod \"cinder-db-create-q6mpl\" (UID: \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\") " pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.352811 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5882-account-create-update-s4s2j"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.353786 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.355866 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.361427 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5882-account-create-update-s4s2j"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.421761 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhs6c\" (UniqueName: \"kubernetes.io/projected/7daa6505-5dd3-48bd-bba3-18c707ea38ed-kube-api-access-rhs6c\") pod \"cinder-db-create-q6mpl\" (UID: \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\") " pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.422168 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/651f6398-d609-4a58-9b97-4bff8aff24cd-operator-scripts\") pod \"cinder-5882-account-create-update-s4s2j\" (UID: \"651f6398-d609-4a58-9b97-4bff8aff24cd\") " pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.422207 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7daa6505-5dd3-48bd-bba3-18c707ea38ed-operator-scripts\") pod \"cinder-db-create-q6mpl\" (UID: \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\") " pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.422241 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hwk9\" (UniqueName: \"kubernetes.io/projected/651f6398-d609-4a58-9b97-4bff8aff24cd-kube-api-access-2hwk9\") pod \"cinder-5882-account-create-update-s4s2j\" (UID: 
\"651f6398-d609-4a58-9b97-4bff8aff24cd\") " pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.423165 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7daa6505-5dd3-48bd-bba3-18c707ea38ed-operator-scripts\") pod \"cinder-db-create-q6mpl\" (UID: \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\") " pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.456060 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-m6m58"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.457167 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m6m58" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.458529 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhs6c\" (UniqueName: \"kubernetes.io/projected/7daa6505-5dd3-48bd-bba3-18c707ea38ed-kube-api-access-rhs6c\") pod \"cinder-db-create-q6mpl\" (UID: \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\") " pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.478109 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m6m58"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.510464 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-wxpd2"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.511624 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.514537 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.514741 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.514772 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mxpf8" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.514903 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.524428 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hwk9\" (UniqueName: \"kubernetes.io/projected/651f6398-d609-4a58-9b97-4bff8aff24cd-kube-api-access-2hwk9\") pod \"cinder-5882-account-create-update-s4s2j\" (UID: \"651f6398-d609-4a58-9b97-4bff8aff24cd\") " pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.524728 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wxpd2"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.525220 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/651f6398-d609-4a58-9b97-4bff8aff24cd-operator-scripts\") pod \"cinder-5882-account-create-update-s4s2j\" (UID: \"651f6398-d609-4a58-9b97-4bff8aff24cd\") " pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.525897 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/651f6398-d609-4a58-9b97-4bff8aff24cd-operator-scripts\") pod 
\"cinder-5882-account-create-update-s4s2j\" (UID: \"651f6398-d609-4a58-9b97-4bff8aff24cd\") " pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.558725 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-bgwbh"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.560352 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.567163 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.567530 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bgwbh"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.570029 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hwk9\" (UniqueName: \"kubernetes.io/projected/651f6398-d609-4a58-9b97-4bff8aff24cd-kube-api-access-2hwk9\") pod \"cinder-5882-account-create-update-s4s2j\" (UID: \"651f6398-d609-4a58-9b97-4bff8aff24cd\") " pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.626369 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzjs9\" (UniqueName: \"kubernetes.io/projected/fe096774-7016-4037-93fb-e0154de207ba-kube-api-access-fzjs9\") pod \"barbican-db-create-m6m58\" (UID: \"fe096774-7016-4037-93fb-e0154de207ba\") " pod="openstack/barbican-db-create-m6m58" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.626413 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe096774-7016-4037-93fb-e0154de207ba-operator-scripts\") pod \"barbican-db-create-m6m58\" (UID: 
\"fe096774-7016-4037-93fb-e0154de207ba\") " pod="openstack/barbican-db-create-m6m58" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.626442 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-config-data\") pod \"keystone-db-sync-wxpd2\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.626508 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-combined-ca-bundle\") pod \"keystone-db-sync-wxpd2\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.626538 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8fr\" (UniqueName: \"kubernetes.io/projected/2620062d-d286-4cae-b123-b53cf5c9f71a-kube-api-access-kv8fr\") pod \"keystone-db-sync-wxpd2\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.661142 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-aabe-account-create-update-lnvhh"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.662297 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.665257 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.675355 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-aabe-account-create-update-lnvhh"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.727904 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-combined-ca-bundle\") pod \"keystone-db-sync-wxpd2\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.727952 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv8fr\" (UniqueName: \"kubernetes.io/projected/2620062d-d286-4cae-b123-b53cf5c9f71a-kube-api-access-kv8fr\") pod \"keystone-db-sync-wxpd2\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.727995 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5696d56-384e-47a4-bff3-cc4a07264817-operator-scripts\") pod \"neutron-db-create-bgwbh\" (UID: \"d5696d56-384e-47a4-bff3-cc4a07264817\") " pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.728051 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzjs9\" (UniqueName: \"kubernetes.io/projected/fe096774-7016-4037-93fb-e0154de207ba-kube-api-access-fzjs9\") pod \"barbican-db-create-m6m58\" (UID: \"fe096774-7016-4037-93fb-e0154de207ba\") " pod="openstack/barbican-db-create-m6m58" Feb 28 
09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.728067 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe096774-7016-4037-93fb-e0154de207ba-operator-scripts\") pod \"barbican-db-create-m6m58\" (UID: \"fe096774-7016-4037-93fb-e0154de207ba\") " pod="openstack/barbican-db-create-m6m58" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.728090 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-config-data\") pod \"keystone-db-sync-wxpd2\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.728121 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4hvf\" (UniqueName: \"kubernetes.io/projected/d5696d56-384e-47a4-bff3-cc4a07264817-kube-api-access-q4hvf\") pod \"neutron-db-create-bgwbh\" (UID: \"d5696d56-384e-47a4-bff3-cc4a07264817\") " pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.729797 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe096774-7016-4037-93fb-e0154de207ba-operator-scripts\") pod \"barbican-db-create-m6m58\" (UID: \"fe096774-7016-4037-93fb-e0154de207ba\") " pod="openstack/barbican-db-create-m6m58" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.730675 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.739965 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-config-data\") pod \"keystone-db-sync-wxpd2\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.740933 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-combined-ca-bundle\") pod \"keystone-db-sync-wxpd2\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.755330 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv8fr\" (UniqueName: \"kubernetes.io/projected/2620062d-d286-4cae-b123-b53cf5c9f71a-kube-api-access-kv8fr\") pod \"keystone-db-sync-wxpd2\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.761538 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-247f-account-create-update-spp2j"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.770886 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.771278 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzjs9\" (UniqueName: \"kubernetes.io/projected/fe096774-7016-4037-93fb-e0154de207ba-kube-api-access-fzjs9\") pod \"barbican-db-create-m6m58\" (UID: \"fe096774-7016-4037-93fb-e0154de207ba\") " pod="openstack/barbican-db-create-m6m58" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.774450 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.782555 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-247f-account-create-update-spp2j"] Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.809080 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m6m58" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.830038 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4hvf\" (UniqueName: \"kubernetes.io/projected/d5696d56-384e-47a4-bff3-cc4a07264817-kube-api-access-q4hvf\") pod \"neutron-db-create-bgwbh\" (UID: \"d5696d56-384e-47a4-bff3-cc4a07264817\") " pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.830094 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef240be-40f4-4734-81bb-95b0b99a83b7-operator-scripts\") pod \"neutron-aabe-account-create-update-lnvhh\" (UID: \"7ef240be-40f4-4734-81bb-95b0b99a83b7\") " pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.830150 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lswl6\" (UniqueName: \"kubernetes.io/projected/7ef240be-40f4-4734-81bb-95b0b99a83b7-kube-api-access-lswl6\") pod \"neutron-aabe-account-create-update-lnvhh\" (UID: \"7ef240be-40f4-4734-81bb-95b0b99a83b7\") " pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.830225 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5696d56-384e-47a4-bff3-cc4a07264817-operator-scripts\") pod \"neutron-db-create-bgwbh\" (UID: \"d5696d56-384e-47a4-bff3-cc4a07264817\") " pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.830959 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5696d56-384e-47a4-bff3-cc4a07264817-operator-scripts\") pod \"neutron-db-create-bgwbh\" (UID: \"d5696d56-384e-47a4-bff3-cc4a07264817\") " pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.832962 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.849725 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4hvf\" (UniqueName: \"kubernetes.io/projected/d5696d56-384e-47a4-bff3-cc4a07264817-kube-api-access-q4hvf\") pod \"neutron-db-create-bgwbh\" (UID: \"d5696d56-384e-47a4-bff3-cc4a07264817\") " pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.933868 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef240be-40f4-4734-81bb-95b0b99a83b7-operator-scripts\") pod \"neutron-aabe-account-create-update-lnvhh\" (UID: \"7ef240be-40f4-4734-81bb-95b0b99a83b7\") " pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.933905 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4gfk\" (UniqueName: \"kubernetes.io/projected/c05a8ea6-88b3-4771-91db-da109123131d-kube-api-access-k4gfk\") pod \"barbican-247f-account-create-update-spp2j\" (UID: \"c05a8ea6-88b3-4771-91db-da109123131d\") " pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.933951 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lswl6\" (UniqueName: \"kubernetes.io/projected/7ef240be-40f4-4734-81bb-95b0b99a83b7-kube-api-access-lswl6\") pod \"neutron-aabe-account-create-update-lnvhh\" (UID: \"7ef240be-40f4-4734-81bb-95b0b99a83b7\") " pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.934027 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c05a8ea6-88b3-4771-91db-da109123131d-operator-scripts\") pod \"barbican-247f-account-create-update-spp2j\" (UID: \"c05a8ea6-88b3-4771-91db-da109123131d\") " pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.934694 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef240be-40f4-4734-81bb-95b0b99a83b7-operator-scripts\") pod \"neutron-aabe-account-create-update-lnvhh\" (UID: \"7ef240be-40f4-4734-81bb-95b0b99a83b7\") " pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.949479 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.963812 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lswl6\" (UniqueName: \"kubernetes.io/projected/7ef240be-40f4-4734-81bb-95b0b99a83b7-kube-api-access-lswl6\") pod \"neutron-aabe-account-create-update-lnvhh\" (UID: \"7ef240be-40f4-4734-81bb-95b0b99a83b7\") " pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:22 crc kubenswrapper[4996]: I0228 09:20:22.980109 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.036203 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c05a8ea6-88b3-4771-91db-da109123131d-operator-scripts\") pod \"barbican-247f-account-create-update-spp2j\" (UID: \"c05a8ea6-88b3-4771-91db-da109123131d\") " pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.036297 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4gfk\" (UniqueName: \"kubernetes.io/projected/c05a8ea6-88b3-4771-91db-da109123131d-kube-api-access-k4gfk\") pod \"barbican-247f-account-create-update-spp2j\" (UID: \"c05a8ea6-88b3-4771-91db-da109123131d\") " pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.037364 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c05a8ea6-88b3-4771-91db-da109123131d-operator-scripts\") pod \"barbican-247f-account-create-update-spp2j\" (UID: \"c05a8ea6-88b3-4771-91db-da109123131d\") " pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.063633 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4gfk\" (UniqueName: \"kubernetes.io/projected/c05a8ea6-88b3-4771-91db-da109123131d-kube-api-access-k4gfk\") pod \"barbican-247f-account-create-update-spp2j\" (UID: \"c05a8ea6-88b3-4771-91db-da109123131d\") " pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.070263 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5882-account-create-update-s4s2j"] Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.097502 
4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-q6mpl"] Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.117659 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:23 crc kubenswrapper[4996]: W0228 09:20:23.143033 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7daa6505_5dd3_48bd_bba3_18c707ea38ed.slice/crio-03fd76d0c68f5c9bae5b848e643473a6c17fb24414028d5c711c76c6722a62e8 WatchSource:0}: Error finding container 03fd76d0c68f5c9bae5b848e643473a6c17fb24414028d5c711c76c6722a62e8: Status 404 returned error can't find the container with id 03fd76d0c68f5c9bae5b848e643473a6c17fb24414028d5c711c76c6722a62e8 Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.324436 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-m6m58"] Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.396436 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-wxpd2"] Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.402179 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.522936 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-bgwbh"] Feb 28 09:20:23 crc kubenswrapper[4996]: W0228 09:20:23.530714 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5696d56_384e_47a4_bff3_cc4a07264817.slice/crio-fedc5f58a345c0d06e4bb5ba8dd3ebb33883698bbf68cf2d34dcc0227541d2f4 WatchSource:0}: Error finding container fedc5f58a345c0d06e4bb5ba8dd3ebb33883698bbf68cf2d34dcc0227541d2f4: Status 404 returned error can't find the container with id 
fedc5f58a345c0d06e4bb5ba8dd3ebb33883698bbf68cf2d34dcc0227541d2f4 Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.597260 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-aabe-account-create-update-lnvhh"] Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.681583 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-247f-account-create-update-spp2j"] Feb 28 09:20:23 crc kubenswrapper[4996]: W0228 09:20:23.783796 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc05a8ea6_88b3_4771_91db_da109123131d.slice/crio-9de01d1315172e106358bdc985f2208836fe308a4a90e929c6ce147eef3e612d WatchSource:0}: Error finding container 9de01d1315172e106358bdc985f2208836fe308a4a90e929c6ce147eef3e612d: Status 404 returned error can't find the container with id 9de01d1315172e106358bdc985f2208836fe308a4a90e929c6ce147eef3e612d Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.908263 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m6m58" event={"ID":"fe096774-7016-4037-93fb-e0154de207ba","Type":"ContainerStarted","Data":"4d748a411cd8fc0be453f8d30296804bd3d2ad631c2bb72621290c6e107ec6c0"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.908307 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m6m58" event={"ID":"fe096774-7016-4037-93fb-e0154de207ba","Type":"ContainerStarted","Data":"50fd8b722d10c0736792e0ca2f8d4ef77e80f025957a9197af27f9f9692ecf1f"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.910927 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aabe-account-create-update-lnvhh" event={"ID":"7ef240be-40f4-4734-81bb-95b0b99a83b7","Type":"ContainerStarted","Data":"cb9b367bd3be7e6995720903df8aa53ea1fbb7df65852f2a2174475b9585fc8a"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.912601 4996 generic.go:334] 
"Generic (PLEG): container finished" podID="7daa6505-5dd3-48bd-bba3-18c707ea38ed" containerID="7bb00be0a06875039914d5a28d12b673a5d93c25289d55b6a3d0303ff3528922" exitCode=0 Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.912722 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-q6mpl" event={"ID":"7daa6505-5dd3-48bd-bba3-18c707ea38ed","Type":"ContainerDied","Data":"7bb00be0a06875039914d5a28d12b673a5d93c25289d55b6a3d0303ff3528922"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.912745 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-q6mpl" event={"ID":"7daa6505-5dd3-48bd-bba3-18c707ea38ed","Type":"ContainerStarted","Data":"03fd76d0c68f5c9bae5b848e643473a6c17fb24414028d5c711c76c6722a62e8"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.913840 4996 generic.go:334] "Generic (PLEG): container finished" podID="651f6398-d609-4a58-9b97-4bff8aff24cd" containerID="fec0214ab90dd3d59c492ea9160c63057276368d55fcab40c560349b98028e8b" exitCode=0 Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.913884 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5882-account-create-update-s4s2j" event={"ID":"651f6398-d609-4a58-9b97-4bff8aff24cd","Type":"ContainerDied","Data":"fec0214ab90dd3d59c492ea9160c63057276368d55fcab40c560349b98028e8b"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.913900 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5882-account-create-update-s4s2j" event={"ID":"651f6398-d609-4a58-9b97-4bff8aff24cd","Type":"ContainerStarted","Data":"adcf65b610fb03877f702ac089c35d56a6ac83ede836852813f8c1ad65e21b84"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.914701 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wxpd2" 
event={"ID":"2620062d-d286-4cae-b123-b53cf5c9f71a","Type":"ContainerStarted","Data":"74dfe50494261d69680a9584a7fb4435640c488a51dd9965c0a7194b79b6405f"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.915438 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bgwbh" event={"ID":"d5696d56-384e-47a4-bff3-cc4a07264817","Type":"ContainerStarted","Data":"fedc5f58a345c0d06e4bb5ba8dd3ebb33883698bbf68cf2d34dcc0227541d2f4"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.916305 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-247f-account-create-update-spp2j" event={"ID":"c05a8ea6-88b3-4771-91db-da109123131d","Type":"ContainerStarted","Data":"9de01d1315172e106358bdc985f2208836fe308a4a90e929c6ce147eef3e612d"} Feb 28 09:20:23 crc kubenswrapper[4996]: I0228 09:20:23.925865 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-m6m58" podStartSLOduration=1.925850481 podStartE2EDuration="1.925850481s" podCreationTimestamp="2026-02-28 09:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:20:23.925223876 +0000 UTC m=+1187.616026677" watchObservedRunningTime="2026-02-28 09:20:23.925850481 +0000 UTC m=+1187.616653282" Feb 28 09:20:24 crc kubenswrapper[4996]: E0228 09:20:24.210813 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe096774_7016_4037_93fb_e0154de207ba.slice/crio-4d748a411cd8fc0be453f8d30296804bd3d2ad631c2bb72621290c6e107ec6c0.scope\": RecentStats: unable to find data in memory cache]" Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.535102 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 
09:20:24.604138 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fgmfq"] Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.604367 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" podUID="e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" containerName="dnsmasq-dns" containerID="cri-o://7dcb86fbfeb76d775fc5d98d7fc7aa47f5aa531114adf9f386cf2ad20d2e808e" gracePeriod=10 Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.950919 4996 generic.go:334] "Generic (PLEG): container finished" podID="e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" containerID="7dcb86fbfeb76d775fc5d98d7fc7aa47f5aa531114adf9f386cf2ad20d2e808e" exitCode=0 Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.950976 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" event={"ID":"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff","Type":"ContainerDied","Data":"7dcb86fbfeb76d775fc5d98d7fc7aa47f5aa531114adf9f386cf2ad20d2e808e"} Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.958481 4996 generic.go:334] "Generic (PLEG): container finished" podID="d5696d56-384e-47a4-bff3-cc4a07264817" containerID="9240a290b363b794b32436d5b3be28e3a2db690532e7124f014b0fb37788f548" exitCode=0 Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.958618 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bgwbh" event={"ID":"d5696d56-384e-47a4-bff3-cc4a07264817","Type":"ContainerDied","Data":"9240a290b363b794b32436d5b3be28e3a2db690532e7124f014b0fb37788f548"} Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.962673 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-247f-account-create-update-spp2j" event={"ID":"c05a8ea6-88b3-4771-91db-da109123131d","Type":"ContainerStarted","Data":"5785386447ca3373e5481d45f92674f1917704bef61d2860d83804684e07b3db"} Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.978552 
4996 generic.go:334] "Generic (PLEG): container finished" podID="fe096774-7016-4037-93fb-e0154de207ba" containerID="4d748a411cd8fc0be453f8d30296804bd3d2ad631c2bb72621290c6e107ec6c0" exitCode=0 Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.978603 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m6m58" event={"ID":"fe096774-7016-4037-93fb-e0154de207ba","Type":"ContainerDied","Data":"4d748a411cd8fc0be453f8d30296804bd3d2ad631c2bb72621290c6e107ec6c0"} Feb 28 09:20:24 crc kubenswrapper[4996]: I0228 09:20:24.980348 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aabe-account-create-update-lnvhh" event={"ID":"7ef240be-40f4-4734-81bb-95b0b99a83b7","Type":"ContainerStarted","Data":"7a37adb861d87998a7154f36713ec32333e314142466d100f8cfb1d451814313"} Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.271191 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.390118 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-sb\") pod \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.390161 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-config\") pod \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.390196 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-dns-svc\") pod \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\" 
(UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.390334 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-nb\") pod \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.390449 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj4l9\" (UniqueName: \"kubernetes.io/projected/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-kube-api-access-xj4l9\") pod \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\" (UID: \"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff\") " Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.405313 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.420165 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-kube-api-access-xj4l9" (OuterVolumeSpecName: "kube-api-access-xj4l9") pod "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" (UID: "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff"). InnerVolumeSpecName "kube-api-access-xj4l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.452965 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" (UID: "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.454747 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-config" (OuterVolumeSpecName: "config") pod "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" (UID: "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.466522 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" (UID: "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.472924 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.473383 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" (UID: "e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.492372 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.492411 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.492425 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.492436 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.492447 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj4l9\" (UniqueName: \"kubernetes.io/projected/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff-kube-api-access-xj4l9\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.593195 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhs6c\" (UniqueName: \"kubernetes.io/projected/7daa6505-5dd3-48bd-bba3-18c707ea38ed-kube-api-access-rhs6c\") pod \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\" (UID: \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\") " Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.593282 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/651f6398-d609-4a58-9b97-4bff8aff24cd-operator-scripts\") pod \"651f6398-d609-4a58-9b97-4bff8aff24cd\" (UID: \"651f6398-d609-4a58-9b97-4bff8aff24cd\") " Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.593435 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7daa6505-5dd3-48bd-bba3-18c707ea38ed-operator-scripts\") pod \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\" (UID: \"7daa6505-5dd3-48bd-bba3-18c707ea38ed\") " Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.594075 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hwk9\" (UniqueName: \"kubernetes.io/projected/651f6398-d609-4a58-9b97-4bff8aff24cd-kube-api-access-2hwk9\") pod \"651f6398-d609-4a58-9b97-4bff8aff24cd\" (UID: \"651f6398-d609-4a58-9b97-4bff8aff24cd\") " Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.594218 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651f6398-d609-4a58-9b97-4bff8aff24cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "651f6398-d609-4a58-9b97-4bff8aff24cd" (UID: "651f6398-d609-4a58-9b97-4bff8aff24cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.594298 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7daa6505-5dd3-48bd-bba3-18c707ea38ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7daa6505-5dd3-48bd-bba3-18c707ea38ed" (UID: "7daa6505-5dd3-48bd-bba3-18c707ea38ed"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.594719 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7daa6505-5dd3-48bd-bba3-18c707ea38ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.594754 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/651f6398-d609-4a58-9b97-4bff8aff24cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.596784 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7daa6505-5dd3-48bd-bba3-18c707ea38ed-kube-api-access-rhs6c" (OuterVolumeSpecName: "kube-api-access-rhs6c") pod "7daa6505-5dd3-48bd-bba3-18c707ea38ed" (UID: "7daa6505-5dd3-48bd-bba3-18c707ea38ed"). InnerVolumeSpecName "kube-api-access-rhs6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.597222 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651f6398-d609-4a58-9b97-4bff8aff24cd-kube-api-access-2hwk9" (OuterVolumeSpecName: "kube-api-access-2hwk9") pod "651f6398-d609-4a58-9b97-4bff8aff24cd" (UID: "651f6398-d609-4a58-9b97-4bff8aff24cd"). InnerVolumeSpecName "kube-api-access-2hwk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.696505 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hwk9\" (UniqueName: \"kubernetes.io/projected/651f6398-d609-4a58-9b97-4bff8aff24cd-kube-api-access-2hwk9\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:25 crc kubenswrapper[4996]: I0228 09:20:25.696535 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhs6c\" (UniqueName: \"kubernetes.io/projected/7daa6505-5dd3-48bd-bba3-18c707ea38ed-kube-api-access-rhs6c\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.008851 4996 generic.go:334] "Generic (PLEG): container finished" podID="7ef240be-40f4-4734-81bb-95b0b99a83b7" containerID="7a37adb861d87998a7154f36713ec32333e314142466d100f8cfb1d451814313" exitCode=0 Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.009121 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aabe-account-create-update-lnvhh" event={"ID":"7ef240be-40f4-4734-81bb-95b0b99a83b7","Type":"ContainerDied","Data":"7a37adb861d87998a7154f36713ec32333e314142466d100f8cfb1d451814313"} Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.013606 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-q6mpl" event={"ID":"7daa6505-5dd3-48bd-bba3-18c707ea38ed","Type":"ContainerDied","Data":"03fd76d0c68f5c9bae5b848e643473a6c17fb24414028d5c711c76c6722a62e8"} Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.013669 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03fd76d0c68f5c9bae5b848e643473a6c17fb24414028d5c711c76c6722a62e8" Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.013680 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-q6mpl" Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.017694 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5882-account-create-update-s4s2j" Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.017712 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5882-account-create-update-s4s2j" event={"ID":"651f6398-d609-4a58-9b97-4bff8aff24cd","Type":"ContainerDied","Data":"adcf65b610fb03877f702ac089c35d56a6ac83ede836852813f8c1ad65e21b84"} Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.017742 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adcf65b610fb03877f702ac089c35d56a6ac83ede836852813f8c1ad65e21b84" Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.019859 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" event={"ID":"e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff","Type":"ContainerDied","Data":"45bc67656c0b0b60414247d8ae3298433e9975ddb8e4a182b564c57cdf9c41e2"} Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.019871 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fgmfq" Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.019888 4996 scope.go:117] "RemoveContainer" containerID="7dcb86fbfeb76d775fc5d98d7fc7aa47f5aa531114adf9f386cf2ad20d2e808e" Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.022235 4996 generic.go:334] "Generic (PLEG): container finished" podID="c05a8ea6-88b3-4771-91db-da109123131d" containerID="5785386447ca3373e5481d45f92674f1917704bef61d2860d83804684e07b3db" exitCode=0 Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.022305 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-247f-account-create-update-spp2j" event={"ID":"c05a8ea6-88b3-4771-91db-da109123131d","Type":"ContainerDied","Data":"5785386447ca3373e5481d45f92674f1917704bef61d2860d83804684e07b3db"} Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.077337 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fgmfq"] Feb 28 09:20:26 crc kubenswrapper[4996]: I0228 09:20:26.087841 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fgmfq"] Feb 28 09:20:27 crc kubenswrapper[4996]: I0228 09:20:27.051302 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" path="/var/lib/kubelet/pods/e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff/volumes" Feb 28 09:20:28 crc kubenswrapper[4996]: I0228 09:20:28.922886 4996 scope.go:117] "RemoveContainer" containerID="b18dd1b43f7a6974dc8fcf3cb44679f9c6afac1412238ddad71ba5897d1a1dc3" Feb 28 09:20:28 crc kubenswrapper[4996]: I0228 09:20:28.981552 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.062520 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-aabe-account-create-update-lnvhh" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.068704 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-bgwbh" event={"ID":"d5696d56-384e-47a4-bff3-cc4a07264817","Type":"ContainerDied","Data":"fedc5f58a345c0d06e4bb5ba8dd3ebb33883698bbf68cf2d34dcc0227541d2f4"} Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.068738 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fedc5f58a345c0d06e4bb5ba8dd3ebb33883698bbf68cf2d34dcc0227541d2f4" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.068750 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-247f-account-create-update-spp2j" event={"ID":"c05a8ea6-88b3-4771-91db-da109123131d","Type":"ContainerDied","Data":"9de01d1315172e106358bdc985f2208836fe308a4a90e929c6ce147eef3e612d"} Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.068760 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9de01d1315172e106358bdc985f2208836fe308a4a90e929c6ce147eef3e612d" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.068769 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-m6m58" event={"ID":"fe096774-7016-4037-93fb-e0154de207ba","Type":"ContainerDied","Data":"50fd8b722d10c0736792e0ca2f8d4ef77e80f025957a9197af27f9f9692ecf1f"} Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.068777 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50fd8b722d10c0736792e0ca2f8d4ef77e80f025957a9197af27f9f9692ecf1f" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.068784 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-aabe-account-create-update-lnvhh" event={"ID":"7ef240be-40f4-4734-81bb-95b0b99a83b7","Type":"ContainerDied","Data":"cb9b367bd3be7e6995720903df8aa53ea1fbb7df65852f2a2174475b9585fc8a"} 
Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.068794 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb9b367bd3be7e6995720903df8aa53ea1fbb7df65852f2a2174475b9585fc8a" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.074712 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.080350 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m6m58" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.091664 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lswl6\" (UniqueName: \"kubernetes.io/projected/7ef240be-40f4-4734-81bb-95b0b99a83b7-kube-api-access-lswl6\") pod \"7ef240be-40f4-4734-81bb-95b0b99a83b7\" (UID: \"7ef240be-40f4-4734-81bb-95b0b99a83b7\") " Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.091893 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef240be-40f4-4734-81bb-95b0b99a83b7-operator-scripts\") pod \"7ef240be-40f4-4734-81bb-95b0b99a83b7\" (UID: \"7ef240be-40f4-4734-81bb-95b0b99a83b7\") " Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.092897 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef240be-40f4-4734-81bb-95b0b99a83b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ef240be-40f4-4734-81bb-95b0b99a83b7" (UID: "7ef240be-40f4-4734-81bb-95b0b99a83b7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.096511 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef240be-40f4-4734-81bb-95b0b99a83b7-kube-api-access-lswl6" (OuterVolumeSpecName: "kube-api-access-lswl6") pod "7ef240be-40f4-4734-81bb-95b0b99a83b7" (UID: "7ef240be-40f4-4734-81bb-95b0b99a83b7"). InnerVolumeSpecName "kube-api-access-lswl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.110195 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.192860 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe096774-7016-4037-93fb-e0154de207ba-operator-scripts\") pod \"fe096774-7016-4037-93fb-e0154de207ba\" (UID: \"fe096774-7016-4037-93fb-e0154de207ba\") " Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.192911 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4hvf\" (UniqueName: \"kubernetes.io/projected/d5696d56-384e-47a4-bff3-cc4a07264817-kube-api-access-q4hvf\") pod \"d5696d56-384e-47a4-bff3-cc4a07264817\" (UID: \"d5696d56-384e-47a4-bff3-cc4a07264817\") " Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.193016 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5696d56-384e-47a4-bff3-cc4a07264817-operator-scripts\") pod \"d5696d56-384e-47a4-bff3-cc4a07264817\" (UID: \"d5696d56-384e-47a4-bff3-cc4a07264817\") " Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.193084 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzjs9\" (UniqueName: 
\"kubernetes.io/projected/fe096774-7016-4037-93fb-e0154de207ba-kube-api-access-fzjs9\") pod \"fe096774-7016-4037-93fb-e0154de207ba\" (UID: \"fe096774-7016-4037-93fb-e0154de207ba\") " Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.193434 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe096774-7016-4037-93fb-e0154de207ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe096774-7016-4037-93fb-e0154de207ba" (UID: "fe096774-7016-4037-93fb-e0154de207ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.193803 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5696d56-384e-47a4-bff3-cc4a07264817-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5696d56-384e-47a4-bff3-cc4a07264817" (UID: "d5696d56-384e-47a4-bff3-cc4a07264817"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.194400 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef240be-40f4-4734-81bb-95b0b99a83b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.194738 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lswl6\" (UniqueName: \"kubernetes.io/projected/7ef240be-40f4-4734-81bb-95b0b99a83b7-kube-api-access-lswl6\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.196412 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5696d56-384e-47a4-bff3-cc4a07264817-kube-api-access-q4hvf" (OuterVolumeSpecName: "kube-api-access-q4hvf") pod "d5696d56-384e-47a4-bff3-cc4a07264817" (UID: "d5696d56-384e-47a4-bff3-cc4a07264817"). 
InnerVolumeSpecName "kube-api-access-q4hvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.197378 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe096774-7016-4037-93fb-e0154de207ba-kube-api-access-fzjs9" (OuterVolumeSpecName: "kube-api-access-fzjs9") pod "fe096774-7016-4037-93fb-e0154de207ba" (UID: "fe096774-7016-4037-93fb-e0154de207ba"). InnerVolumeSpecName "kube-api-access-fzjs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.295861 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c05a8ea6-88b3-4771-91db-da109123131d-operator-scripts\") pod \"c05a8ea6-88b3-4771-91db-da109123131d\" (UID: \"c05a8ea6-88b3-4771-91db-da109123131d\") " Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.296074 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4gfk\" (UniqueName: \"kubernetes.io/projected/c05a8ea6-88b3-4771-91db-da109123131d-kube-api-access-k4gfk\") pod \"c05a8ea6-88b3-4771-91db-da109123131d\" (UID: \"c05a8ea6-88b3-4771-91db-da109123131d\") " Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.296402 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c05a8ea6-88b3-4771-91db-da109123131d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c05a8ea6-88b3-4771-91db-da109123131d" (UID: "c05a8ea6-88b3-4771-91db-da109123131d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.297085 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5696d56-384e-47a4-bff3-cc4a07264817-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.297108 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzjs9\" (UniqueName: \"kubernetes.io/projected/fe096774-7016-4037-93fb-e0154de207ba-kube-api-access-fzjs9\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.297122 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c05a8ea6-88b3-4771-91db-da109123131d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.297134 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe096774-7016-4037-93fb-e0154de207ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.297145 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4hvf\" (UniqueName: \"kubernetes.io/projected/d5696d56-384e-47a4-bff3-cc4a07264817-kube-api-access-q4hvf\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.299222 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c05a8ea6-88b3-4771-91db-da109123131d-kube-api-access-k4gfk" (OuterVolumeSpecName: "kube-api-access-k4gfk") pod "c05a8ea6-88b3-4771-91db-da109123131d" (UID: "c05a8ea6-88b3-4771-91db-da109123131d"). InnerVolumeSpecName "kube-api-access-k4gfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:29 crc kubenswrapper[4996]: I0228 09:20:29.398274 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4gfk\" (UniqueName: \"kubernetes.io/projected/c05a8ea6-88b3-4771-91db-da109123131d-kube-api-access-k4gfk\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:30 crc kubenswrapper[4996]: I0228 09:20:30.078044 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-247f-account-create-update-spp2j" Feb 28 09:20:30 crc kubenswrapper[4996]: I0228 09:20:30.079434 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wxpd2" event={"ID":"2620062d-d286-4cae-b123-b53cf5c9f71a","Type":"ContainerStarted","Data":"4d477925c3d293ff1c7a2512303a1d3fce654d456ea197429f1538bdb124f868"} Feb 28 09:20:30 crc kubenswrapper[4996]: I0228 09:20:30.080601 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-m6m58" Feb 28 09:20:30 crc kubenswrapper[4996]: I0228 09:20:30.081136 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-bgwbh" Feb 28 09:20:30 crc kubenswrapper[4996]: I0228 09:20:30.110434 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-wxpd2" podStartSLOduration=2.51845824 podStartE2EDuration="8.11040735s" podCreationTimestamp="2026-02-28 09:20:22 +0000 UTC" firstStartedPulling="2026-02-28 09:20:23.401916898 +0000 UTC m=+1187.092719709" lastFinishedPulling="2026-02-28 09:20:28.993865998 +0000 UTC m=+1192.684668819" observedRunningTime="2026-02-28 09:20:30.100892611 +0000 UTC m=+1193.791695422" watchObservedRunningTime="2026-02-28 09:20:30.11040735 +0000 UTC m=+1193.801210181" Feb 28 09:20:32 crc kubenswrapper[4996]: I0228 09:20:32.096381 4996 generic.go:334] "Generic (PLEG): container finished" podID="2620062d-d286-4cae-b123-b53cf5c9f71a" containerID="4d477925c3d293ff1c7a2512303a1d3fce654d456ea197429f1538bdb124f868" exitCode=0 Feb 28 09:20:32 crc kubenswrapper[4996]: I0228 09:20:32.096524 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wxpd2" event={"ID":"2620062d-d286-4cae-b123-b53cf5c9f71a","Type":"ContainerDied","Data":"4d477925c3d293ff1c7a2512303a1d3fce654d456ea197429f1538bdb124f868"} Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.474249 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.660991 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-config-data\") pod \"2620062d-d286-4cae-b123-b53cf5c9f71a\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.661304 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-combined-ca-bundle\") pod \"2620062d-d286-4cae-b123-b53cf5c9f71a\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.661424 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv8fr\" (UniqueName: \"kubernetes.io/projected/2620062d-d286-4cae-b123-b53cf5c9f71a-kube-api-access-kv8fr\") pod \"2620062d-d286-4cae-b123-b53cf5c9f71a\" (UID: \"2620062d-d286-4cae-b123-b53cf5c9f71a\") " Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.667379 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2620062d-d286-4cae-b123-b53cf5c9f71a-kube-api-access-kv8fr" (OuterVolumeSpecName: "kube-api-access-kv8fr") pod "2620062d-d286-4cae-b123-b53cf5c9f71a" (UID: "2620062d-d286-4cae-b123-b53cf5c9f71a"). InnerVolumeSpecName "kube-api-access-kv8fr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.707767 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-config-data" (OuterVolumeSpecName: "config-data") pod "2620062d-d286-4cae-b123-b53cf5c9f71a" (UID: "2620062d-d286-4cae-b123-b53cf5c9f71a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.708032 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2620062d-d286-4cae-b123-b53cf5c9f71a" (UID: "2620062d-d286-4cae-b123-b53cf5c9f71a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.762659 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv8fr\" (UniqueName: \"kubernetes.io/projected/2620062d-d286-4cae-b123-b53cf5c9f71a-kube-api-access-kv8fr\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.762688 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:33 crc kubenswrapper[4996]: I0228 09:20:33.762699 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2620062d-d286-4cae-b123-b53cf5c9f71a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.112138 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-wxpd2" event={"ID":"2620062d-d286-4cae-b123-b53cf5c9f71a","Type":"ContainerDied","Data":"74dfe50494261d69680a9584a7fb4435640c488a51dd9965c0a7194b79b6405f"} Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.112183 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74dfe50494261d69680a9584a7fb4435640c488a51dd9965c0a7194b79b6405f" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.112194 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-wxpd2" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.745823 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-xx49x"] Feb 28 09:20:34 crc kubenswrapper[4996]: E0228 09:20:34.746436 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" containerName="init" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746454 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" containerName="init" Feb 28 09:20:34 crc kubenswrapper[4996]: E0228 09:20:34.746472 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" containerName="dnsmasq-dns" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746479 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" containerName="dnsmasq-dns" Feb 28 09:20:34 crc kubenswrapper[4996]: E0228 09:20:34.746488 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7daa6505-5dd3-48bd-bba3-18c707ea38ed" containerName="mariadb-database-create" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746496 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7daa6505-5dd3-48bd-bba3-18c707ea38ed" containerName="mariadb-database-create" Feb 28 09:20:34 crc kubenswrapper[4996]: E0228 09:20:34.746517 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef240be-40f4-4734-81bb-95b0b99a83b7" containerName="mariadb-account-create-update" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746525 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef240be-40f4-4734-81bb-95b0b99a83b7" containerName="mariadb-account-create-update" Feb 28 09:20:34 crc kubenswrapper[4996]: E0228 09:20:34.746537 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651f6398-d609-4a58-9b97-4bff8aff24cd" 
containerName="mariadb-account-create-update" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746544 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="651f6398-d609-4a58-9b97-4bff8aff24cd" containerName="mariadb-account-create-update" Feb 28 09:20:34 crc kubenswrapper[4996]: E0228 09:20:34.746554 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c05a8ea6-88b3-4771-91db-da109123131d" containerName="mariadb-account-create-update" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746561 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c05a8ea6-88b3-4771-91db-da109123131d" containerName="mariadb-account-create-update" Feb 28 09:20:34 crc kubenswrapper[4996]: E0228 09:20:34.746578 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5696d56-384e-47a4-bff3-cc4a07264817" containerName="mariadb-database-create" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746586 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5696d56-384e-47a4-bff3-cc4a07264817" containerName="mariadb-database-create" Feb 28 09:20:34 crc kubenswrapper[4996]: E0228 09:20:34.746603 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2620062d-d286-4cae-b123-b53cf5c9f71a" containerName="keystone-db-sync" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746611 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2620062d-d286-4cae-b123-b53cf5c9f71a" containerName="keystone-db-sync" Feb 28 09:20:34 crc kubenswrapper[4996]: E0228 09:20:34.746619 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe096774-7016-4037-93fb-e0154de207ba" containerName="mariadb-database-create" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746626 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe096774-7016-4037-93fb-e0154de207ba" containerName="mariadb-database-create" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746917 4996 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7daa6505-5dd3-48bd-bba3-18c707ea38ed" containerName="mariadb-database-create" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746956 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef240be-40f4-4734-81bb-95b0b99a83b7" containerName="mariadb-account-create-update" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746963 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="2620062d-d286-4cae-b123-b53cf5c9f71a" containerName="keystone-db-sync" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746973 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe096774-7016-4037-93fb-e0154de207ba" containerName="mariadb-database-create" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746982 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="651f6398-d609-4a58-9b97-4bff8aff24cd" containerName="mariadb-account-create-update" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.746991 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05a8ea6-88b3-4771-91db-da109123131d" containerName="mariadb-account-create-update" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.747001 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5696d56-384e-47a4-bff3-cc4a07264817" containerName="mariadb-database-create" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.747033 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2328d30-7bd5-4fc4-b7d7-cad17b8d3eff" containerName="dnsmasq-dns" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.748691 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.762948 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-xx49x"] Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.785302 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.785428 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmxq\" (UniqueName: \"kubernetes.io/projected/cf05cde9-be3f-43b2-bb36-735310be0c3a-kube-api-access-nvmxq\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.785491 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-config\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.785525 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.785593 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-dns-svc\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.796188 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-k6pkt"] Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.797272 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.800363 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.800423 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.805783 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.806388 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mxpf8" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.806510 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.814081 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-k6pkt"] Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.890883 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmxq\" (UniqueName: \"kubernetes.io/projected/cf05cde9-be3f-43b2-bb36-735310be0c3a-kube-api-access-nvmxq\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " 
pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.890982 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-config\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.891049 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.891085 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-dns-svc\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.891143 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.892292 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 
09:20:34.892957 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-config\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.893652 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.894862 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-dns-svc\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.937364 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmxq\" (UniqueName: \"kubernetes.io/projected/cf05cde9-be3f-43b2-bb36-735310be0c3a-kube-api-access-nvmxq\") pod \"dnsmasq-dns-6546db6db7-xx49x\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.967453 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-kplwp"] Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.968428 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.975304 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8nlr8" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.975455 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.975507 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.980766 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fd5ff4cc-tmjhp"] Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.981916 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.983343 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.984535 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.984554 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2k7xw" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.985227 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.993835 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kplwp"] Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.997852 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-combined-ca-bundle\") pod 
\"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.999683 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-scripts\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.999816 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-fernet-keys\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.999904 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-combined-ca-bundle\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:34 crc kubenswrapper[4996]: I0228 09:20:34.999982 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-scripts\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000073 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tgfv\" (UniqueName: \"kubernetes.io/projected/db0401da-7bc1-4203-bdfb-2a06deade35b-kube-api-access-7tgfv\") pod \"cinder-db-sync-kplwp\" (UID: 
\"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000161 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db0401da-7bc1-4203-bdfb-2a06deade35b-etc-machine-id\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000228 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzzs\" (UniqueName: \"kubernetes.io/projected/8cccb166-daa1-4c11-ba1c-a36d07cf2772-kube-api-access-5wzzs\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000287 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-config-data\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000372 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-config-data\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000446 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-credential-keys\") pod \"keystone-bootstrap-k6pkt\" (UID: 
\"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000544 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-config-data\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000620 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmf2p\" (UniqueName: \"kubernetes.io/projected/142b4341-66d8-4383-b848-f2159dcefffe-kube-api-access-pmf2p\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000682 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cccb166-daa1-4c11-ba1c-a36d07cf2772-logs\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000744 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cccb166-daa1-4c11-ba1c-a36d07cf2772-horizon-secret-key\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000814 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-db-sync-config-data\") pod \"cinder-db-sync-kplwp\" (UID: 
\"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.000907 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-scripts\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.017740 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fd5ff4cc-tmjhp"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.071823 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.098842 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.100466 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104479 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-config-data\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104529 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-credential-keys\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104571 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-config-data\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104595 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmf2p\" (UniqueName: \"kubernetes.io/projected/142b4341-66d8-4383-b848-f2159dcefffe-kube-api-access-pmf2p\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104615 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cccb166-daa1-4c11-ba1c-a36d07cf2772-logs\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104632 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cccb166-daa1-4c11-ba1c-a36d07cf2772-horizon-secret-key\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104663 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-db-sync-config-data\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104694 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-scripts\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104720 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-combined-ca-bundle\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104734 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-scripts\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104767 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-fernet-keys\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104790 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-combined-ca-bundle\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104806 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-scripts\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104825 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tgfv\" (UniqueName: \"kubernetes.io/projected/db0401da-7bc1-4203-bdfb-2a06deade35b-kube-api-access-7tgfv\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104850 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db0401da-7bc1-4203-bdfb-2a06deade35b-etc-machine-id\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104866 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzzs\" (UniqueName: \"kubernetes.io/projected/8cccb166-daa1-4c11-ba1c-a36d07cf2772-kube-api-access-5wzzs\") pod 
\"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.104884 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-config-data\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.105619 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-scripts\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.106091 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-config-data\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.106229 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.106813 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db0401da-7bc1-4203-bdfb-2a06deade35b-etc-machine-id\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.113335 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.113813 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cccb166-daa1-4c11-ba1c-a36d07cf2772-logs\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.122555 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-config-data\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.125423 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-scripts\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.127745 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cccb166-daa1-4c11-ba1c-a36d07cf2772-horizon-secret-key\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.128349 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-combined-ca-bundle\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.128990 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-config-data\") pod 
\"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.130181 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-db-sync-config-data\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.132448 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-credential-keys\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.143845 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-scripts\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.144057 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-fernet-keys\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.161973 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-combined-ca-bundle\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 
09:20:35.217635 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmf2p\" (UniqueName: \"kubernetes.io/projected/142b4341-66d8-4383-b848-f2159dcefffe-kube-api-access-pmf2p\") pod \"keystone-bootstrap-k6pkt\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.219974 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tgfv\" (UniqueName: \"kubernetes.io/projected/db0401da-7bc1-4203-bdfb-2a06deade35b-kube-api-access-7tgfv\") pod \"cinder-db-sync-kplwp\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.234872 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzzs\" (UniqueName: \"kubernetes.io/projected/8cccb166-daa1-4c11-ba1c-a36d07cf2772-kube-api-access-5wzzs\") pod \"horizon-5fd5ff4cc-tmjhp\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.270132 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.299640 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-kplwp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.355492 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.355741 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.355764 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq84b\" (UniqueName: \"kubernetes.io/projected/1f109701-52a5-4a26-ae21-415ebc0d21ff-kube-api-access-xq84b\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.355801 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-log-httpd\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.355822 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-config-data\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 
09:20:35.355844 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-scripts\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.355859 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-run-httpd\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.356119 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.358699 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2fdzl"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.359886 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.362102 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qhlll" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.362728 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.370665 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cdd86cc49-cf96p"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.386134 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-nmp42"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.387121 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.387168 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.394070 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2fdzl"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.401506 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.401536 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gqbdj" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.402108 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.413913 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cdd86cc49-cf96p"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.432319 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.440082 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nmp42"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.459844 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.459920 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.459948 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq84b\" (UniqueName: \"kubernetes.io/projected/1f109701-52a5-4a26-ae21-415ebc0d21ff-kube-api-access-xq84b\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.460019 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-log-httpd\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.460057 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-config-data\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " 
pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.460091 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-scripts\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.460117 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-run-httpd\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.466516 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-log-httpd\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.475930 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-run-httpd\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.476156 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-config-data\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.476716 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.480151 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq84b\" (UniqueName: \"kubernetes.io/projected/1f109701-52a5-4a26-ae21-415ebc0d21ff-kube-api-access-xq84b\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.480173 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-xx49x"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.482658 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.482897 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-scripts\") pod \"ceilometer-0\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") " pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.525063 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vtjbq"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.528956 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564035 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-combined-ca-bundle\") pod \"barbican-db-sync-2fdzl\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564086 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fcb1e96-955c-4fa4-942b-13a451a2d750-horizon-secret-key\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564122 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fcb1e96-955c-4fa4-942b-13a451a2d750-logs\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564165 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-config\") pod \"neutron-db-sync-nmp42\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564190 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nld7\" (UniqueName: \"kubernetes.io/projected/5fcb1e96-955c-4fa4-942b-13a451a2d750-kube-api-access-9nld7\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " 
pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564216 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-config-data\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564239 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-combined-ca-bundle\") pod \"neutron-db-sync-nmp42\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564271 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr8vl\" (UniqueName: \"kubernetes.io/projected/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-kube-api-access-rr8vl\") pod \"neutron-db-sync-nmp42\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564308 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5n8\" (UniqueName: \"kubernetes.io/projected/531cd3d1-8618-42d1-88a1-b23b8ca9be62-kube-api-access-wh5n8\") pod \"barbican-db-sync-2fdzl\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564375 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vtjbq"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564466 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" 
(UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-db-sync-config-data\") pod \"barbican-db-sync-2fdzl\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.564512 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-scripts\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.582419 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.611914 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-j2wgr"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.612971 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.617176 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.617310 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cbv48" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.617816 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.624074 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j2wgr"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666049 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fcb1e96-955c-4fa4-942b-13a451a2d750-logs\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666340 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666363 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-config\") pod \"neutron-db-sync-nmp42\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666390 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9nld7\" (UniqueName: \"kubernetes.io/projected/5fcb1e96-955c-4fa4-942b-13a451a2d750-kube-api-access-9nld7\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666413 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-config-data\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666430 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-combined-ca-bundle\") pod \"neutron-db-sync-nmp42\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666447 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-config\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666463 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4sr\" (UniqueName: \"kubernetes.io/projected/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-kube-api-access-zs4sr\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666494 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666512 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr8vl\" (UniqueName: \"kubernetes.io/projected/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-kube-api-access-rr8vl\") pod \"neutron-db-sync-nmp42\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666540 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh5n8\" (UniqueName: \"kubernetes.io/projected/531cd3d1-8618-42d1-88a1-b23b8ca9be62-kube-api-access-wh5n8\") pod \"barbican-db-sync-2fdzl\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666562 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666578 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-db-sync-config-data\") pod \"barbican-db-sync-2fdzl\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666601 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-scripts\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666655 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-combined-ca-bundle\") pod \"barbican-db-sync-2fdzl\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.666677 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fcb1e96-955c-4fa4-942b-13a451a2d750-horizon-secret-key\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.669546 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fcb1e96-955c-4fa4-942b-13a451a2d750-horizon-secret-key\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.669770 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fcb1e96-955c-4fa4-942b-13a451a2d750-logs\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.676109 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-config\") pod \"neutron-db-sync-nmp42\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " 
pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.680671 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-scripts\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.681712 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-config-data\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.691429 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-combined-ca-bundle\") pod \"neutron-db-sync-nmp42\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.701969 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh5n8\" (UniqueName: \"kubernetes.io/projected/531cd3d1-8618-42d1-88a1-b23b8ca9be62-kube-api-access-wh5n8\") pod \"barbican-db-sync-2fdzl\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.708486 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-db-sync-config-data\") pod \"barbican-db-sync-2fdzl\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.709503 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-combined-ca-bundle\") pod \"barbican-db-sync-2fdzl\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.721879 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr8vl\" (UniqueName: \"kubernetes.io/projected/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-kube-api-access-rr8vl\") pod \"neutron-db-sync-nmp42\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.722720 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nld7\" (UniqueName: \"kubernetes.io/projected/5fcb1e96-955c-4fa4-942b-13a451a2d750-kube-api-access-9nld7\") pod \"horizon-cdd86cc49-cf96p\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.766667 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.796760 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-combined-ca-bundle\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.796815 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6s7p\" (UniqueName: \"kubernetes.io/projected/05164ff9-4bc2-433a-881c-5046c3352637-kube-api-access-k6s7p\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.796854 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05164ff9-4bc2-433a-881c-5046c3352637-logs\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.796901 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-config-data\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.797031 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " 
pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.797082 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-config\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.797104 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs4sr\" (UniqueName: \"kubernetes.io/projected/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-kube-api-access-zs4sr\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.797140 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.797194 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.797240 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-scripts\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc 
kubenswrapper[4996]: I0228 09:20:35.799424 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.799973 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.800680 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.801021 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-config\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.807078 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nmp42" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.815658 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs4sr\" (UniqueName: \"kubernetes.io/projected/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-kube-api-access-zs4sr\") pod \"dnsmasq-dns-7987f74bbc-vtjbq\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") " pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.836169 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.891054 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.898619 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-scripts\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.898676 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-combined-ca-bundle\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.898704 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6s7p\" (UniqueName: \"kubernetes.io/projected/05164ff9-4bc2-433a-881c-5046c3352637-kube-api-access-k6s7p\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc 
kubenswrapper[4996]: I0228 09:20:35.898729 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05164ff9-4bc2-433a-881c-5046c3352637-logs\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.898757 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-config-data\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.903500 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-config-data\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.903938 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05164ff9-4bc2-433a-881c-5046c3352637-logs\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.905091 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-scripts\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.906859 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-combined-ca-bundle\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.928074 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6s7p\" (UniqueName: \"kubernetes.io/projected/05164ff9-4bc2-433a-881c-5046c3352637-kube-api-access-k6s7p\") pod \"placement-db-sync-j2wgr\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.940281 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-kplwp"] Feb 28 09:20:35 crc kubenswrapper[4996]: I0228 09:20:35.962492 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j2wgr" Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.001906 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-xx49x"] Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.238371 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kplwp" event={"ID":"db0401da-7bc1-4203-bdfb-2a06deade35b","Type":"ContainerStarted","Data":"c23bcf07c830d8e6f1ac79f0f7649c2aed8523686f2314fb930edff2eaa6240e"} Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.241981 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-xx49x" event={"ID":"cf05cde9-be3f-43b2-bb36-735310be0c3a","Type":"ContainerStarted","Data":"ced31897a4d6c0a998e36960063f2e9b1bdab29cb6b75ccaabf001d3c42ebb7a"} Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.250403 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fd5ff4cc-tmjhp"] Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.366268 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-bootstrap-k6pkt"] Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.487072 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.628083 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-nmp42"] Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.635516 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2fdzl"] Feb 28 09:20:36 crc kubenswrapper[4996]: W0228 09:20:36.645264 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod531cd3d1_8618_42d1_88a1_b23b8ca9be62.slice/crio-0d967454fa20139efabfd717f20901816644c923bc56397507673464440d12b9 WatchSource:0}: Error finding container 0d967454fa20139efabfd717f20901816644c923bc56397507673464440d12b9: Status 404 returned error can't find the container with id 0d967454fa20139efabfd717f20901816644c923bc56397507673464440d12b9 Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.772600 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cdd86cc49-cf96p"] Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.788197 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-j2wgr"] Feb 28 09:20:36 crc kubenswrapper[4996]: W0228 09:20:36.799871 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05164ff9_4bc2_433a_881c_5046c3352637.slice/crio-b549ea293af69e7dbeaa1beac1e2fd2b11d040e63d4b93f0f4737cb457d8524d WatchSource:0}: Error finding container b549ea293af69e7dbeaa1beac1e2fd2b11d040e63d4b93f0f4737cb457d8524d: Status 404 returned error can't find the container with id b549ea293af69e7dbeaa1beac1e2fd2b11d040e63d4b93f0f4737cb457d8524d Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.803324 4996 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vtjbq"] Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.945349 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fd5ff4cc-tmjhp"] Feb 28 09:20:36 crc kubenswrapper[4996]: I0228 09:20:36.990485 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.019252 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64c897fd85-x7qwt"] Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.080313 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.092338 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-horizon-secret-key\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.092488 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-logs\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.092599 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-config-data\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.092635 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-scripts\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.092656 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pbsw\" (UniqueName: \"kubernetes.io/projected/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-kube-api-access-4pbsw\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.155731 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64c897fd85-x7qwt"] Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.194018 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-config-data\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.194731 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-scripts\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.197291 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-config-data\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc 
kubenswrapper[4996]: I0228 09:20:37.197399 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pbsw\" (UniqueName: \"kubernetes.io/projected/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-kube-api-access-4pbsw\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.197523 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-horizon-secret-key\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.197639 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-logs\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.198057 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-logs\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.198553 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-scripts\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.220919 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" 
(UniqueName: \"kubernetes.io/secret/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-horizon-secret-key\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.244665 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pbsw\" (UniqueName: \"kubernetes.io/projected/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-kube-api-access-4pbsw\") pod \"horizon-64c897fd85-x7qwt\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.297276 4996 generic.go:334] "Generic (PLEG): container finished" podID="64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" containerID="8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda" exitCode=0 Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.297376 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" event={"ID":"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2","Type":"ContainerDied","Data":"8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.297429 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" event={"ID":"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2","Type":"ContainerStarted","Data":"07c87d2d327c5f118492178d2db11586d56b896ee7a4b414d5009823765936b3"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.305828 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k6pkt" event={"ID":"142b4341-66d8-4383-b848-f2159dcefffe","Type":"ContainerStarted","Data":"c5b612763df809a963942ea8b68f42896f0908c5065bea965467f097ca17d47b"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.305873 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k6pkt" 
event={"ID":"142b4341-66d8-4383-b848-f2159dcefffe","Type":"ContainerStarted","Data":"5a0db0a0db37f78b637a2ebec73a9079a27148cc49a1345631fbf8b426774a04"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.310692 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2fdzl" event={"ID":"531cd3d1-8618-42d1-88a1-b23b8ca9be62","Type":"ContainerStarted","Data":"0d967454fa20139efabfd717f20901816644c923bc56397507673464440d12b9"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.326892 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2wgr" event={"ID":"05164ff9-4bc2-433a-881c-5046c3352637","Type":"ContainerStarted","Data":"b549ea293af69e7dbeaa1beac1e2fd2b11d040e63d4b93f0f4737cb457d8524d"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.351585 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerStarted","Data":"9f32cb5e690b997c7348ce1d5ed24cdb4773ddf1478c41290e963673eff60b13"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.354250 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fd5ff4cc-tmjhp" event={"ID":"8cccb166-daa1-4c11-ba1c-a36d07cf2772","Type":"ContainerStarted","Data":"eb793555162ce2b7ed4e877a633b6599748dcd1ef767a96d9c61a5e46062509a"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.360623 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdd86cc49-cf96p" event={"ID":"5fcb1e96-955c-4fa4-942b-13a451a2d750","Type":"ContainerStarted","Data":"e717440553f4cca809d70665cebbddb2eb97c9e6cac2f9c7ab752a481703aa93"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.367551 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nmp42" event={"ID":"3113a2ed-c04d-4f18-9f1d-a47482aa76fa","Type":"ContainerStarted","Data":"2baeee7b0efc41ff490010fdf5d143bc67ba41a430fb206f55c62d62da225a4e"} 
Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.367606 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nmp42" event={"ID":"3113a2ed-c04d-4f18-9f1d-a47482aa76fa","Type":"ContainerStarted","Data":"9791d4819e47f9bdd941af74f85ad5ee44c5450d88a6c53f2ff13344bd4bf913"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.373556 4996 generic.go:334] "Generic (PLEG): container finished" podID="cf05cde9-be3f-43b2-bb36-735310be0c3a" containerID="c0b72b7ec8d5c52131b385077fc0db3642932a2111cbf1e24304eba129887480" exitCode=0 Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.373616 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-xx49x" event={"ID":"cf05cde9-be3f-43b2-bb36-735310be0c3a","Type":"ContainerDied","Data":"c0b72b7ec8d5c52131b385077fc0db3642932a2111cbf1e24304eba129887480"} Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.414018 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-nmp42" podStartSLOduration=2.41398279 podStartE2EDuration="2.41398279s" podCreationTimestamp="2026-02-28 09:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:20:37.39947829 +0000 UTC m=+1201.090281121" watchObservedRunningTime="2026-02-28 09:20:37.41398279 +0000 UTC m=+1201.104785601" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.427630 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.441563 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-k6pkt" podStartSLOduration=3.441544104 podStartE2EDuration="3.441544104s" podCreationTimestamp="2026-02-28 09:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:20:37.419957044 +0000 UTC m=+1201.110759855" watchObservedRunningTime="2026-02-28 09:20:37.441544104 +0000 UTC m=+1201.132346925" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.737723 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.818347 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-config\") pod \"cf05cde9-be3f-43b2-bb36-735310be0c3a\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.818395 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-nb\") pod \"cf05cde9-be3f-43b2-bb36-735310be0c3a\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.818488 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-dns-svc\") pod \"cf05cde9-be3f-43b2-bb36-735310be0c3a\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.818551 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-sb\") pod \"cf05cde9-be3f-43b2-bb36-735310be0c3a\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.818695 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvmxq\" (UniqueName: \"kubernetes.io/projected/cf05cde9-be3f-43b2-bb36-735310be0c3a-kube-api-access-nvmxq\") pod \"cf05cde9-be3f-43b2-bb36-735310be0c3a\" (UID: \"cf05cde9-be3f-43b2-bb36-735310be0c3a\") " Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.822884 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf05cde9-be3f-43b2-bb36-735310be0c3a-kube-api-access-nvmxq" (OuterVolumeSpecName: "kube-api-access-nvmxq") pod "cf05cde9-be3f-43b2-bb36-735310be0c3a" (UID: "cf05cde9-be3f-43b2-bb36-735310be0c3a"). InnerVolumeSpecName "kube-api-access-nvmxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.837726 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf05cde9-be3f-43b2-bb36-735310be0c3a" (UID: "cf05cde9-be3f-43b2-bb36-735310be0c3a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.846072 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf05cde9-be3f-43b2-bb36-735310be0c3a" (UID: "cf05cde9-be3f-43b2-bb36-735310be0c3a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.852953 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-config" (OuterVolumeSpecName: "config") pod "cf05cde9-be3f-43b2-bb36-735310be0c3a" (UID: "cf05cde9-be3f-43b2-bb36-735310be0c3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.865798 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf05cde9-be3f-43b2-bb36-735310be0c3a" (UID: "cf05cde9-be3f-43b2-bb36-735310be0c3a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.920646 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.920686 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.920701 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvmxq\" (UniqueName: \"kubernetes.io/projected/cf05cde9-be3f-43b2-bb36-735310be0c3a-kube-api-access-nvmxq\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:37 crc kubenswrapper[4996]: I0228 09:20:37.920717 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:37 crc 
kubenswrapper[4996]: I0228 09:20:37.920727 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf05cde9-be3f-43b2-bb36-735310be0c3a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.021665 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64c897fd85-x7qwt"] Feb 28 09:20:38 crc kubenswrapper[4996]: W0228 09:20:38.024714 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d51688d_0c98_4246_925e_ad1d4f9ef3d5.slice/crio-4ffc81dfb987da7c5c93157ed5a8eedef47131eec2b71ea7d26a35a38e8d8475 WatchSource:0}: Error finding container 4ffc81dfb987da7c5c93157ed5a8eedef47131eec2b71ea7d26a35a38e8d8475: Status 404 returned error can't find the container with id 4ffc81dfb987da7c5c93157ed5a8eedef47131eec2b71ea7d26a35a38e8d8475 Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.392601 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" event={"ID":"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2","Type":"ContainerStarted","Data":"c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c"} Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.392699 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.394455 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-xx49x" event={"ID":"cf05cde9-be3f-43b2-bb36-735310be0c3a","Type":"ContainerDied","Data":"ced31897a4d6c0a998e36960063f2e9b1bdab29cb6b75ccaabf001d3c42ebb7a"} Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.394536 4996 scope.go:117] "RemoveContainer" containerID="c0b72b7ec8d5c52131b385077fc0db3642932a2111cbf1e24304eba129887480" Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.394961 4996 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-xx49x" Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.400555 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64c897fd85-x7qwt" event={"ID":"4d51688d-0c98-4246-925e-ad1d4f9ef3d5","Type":"ContainerStarted","Data":"4ffc81dfb987da7c5c93157ed5a8eedef47131eec2b71ea7d26a35a38e8d8475"} Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.492525 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" podStartSLOduration=3.492504725 podStartE2EDuration="3.492504725s" podCreationTimestamp="2026-02-28 09:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:20:38.421729948 +0000 UTC m=+1202.112532759" watchObservedRunningTime="2026-02-28 09:20:38.492504725 +0000 UTC m=+1202.183307536" Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.529832 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-xx49x"] Feb 28 09:20:38 crc kubenswrapper[4996]: I0228 09:20:38.541874 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-xx49x"] Feb 28 09:20:39 crc kubenswrapper[4996]: I0228 09:20:39.050016 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf05cde9-be3f-43b2-bb36-735310be0c3a" path="/var/lib/kubelet/pods/cf05cde9-be3f-43b2-bb36-735310be0c3a/volumes" Feb 28 09:20:39 crc kubenswrapper[4996]: I0228 09:20:39.624953 4996 scope.go:117] "RemoveContainer" containerID="239e6b119bd0f7fa2300d4a7813fb51c9e2196a7432241432aa8841af3ce4b07" Feb 28 09:20:41 crc kubenswrapper[4996]: I0228 09:20:41.440587 4996 generic.go:334] "Generic (PLEG): container finished" podID="142b4341-66d8-4383-b848-f2159dcefffe" 
containerID="c5b612763df809a963942ea8b68f42896f0908c5065bea965467f097ca17d47b" exitCode=0 Feb 28 09:20:41 crc kubenswrapper[4996]: I0228 09:20:41.440911 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k6pkt" event={"ID":"142b4341-66d8-4383-b848-f2159dcefffe","Type":"ContainerDied","Data":"c5b612763df809a963942ea8b68f42896f0908c5065bea965467f097ca17d47b"} Feb 28 09:20:42 crc kubenswrapper[4996]: I0228 09:20:42.248805 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:20:42 crc kubenswrapper[4996]: I0228 09:20:42.248860 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:20:42 crc kubenswrapper[4996]: I0228 09:20:42.248904 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:20:42 crc kubenswrapper[4996]: I0228 09:20:42.249607 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4772ad3990edf9d0c6d563de92a45c70bf5a82075c0fa4fd5de03b133e39b174"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:20:42 crc kubenswrapper[4996]: I0228 09:20:42.249712 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://4772ad3990edf9d0c6d563de92a45c70bf5a82075c0fa4fd5de03b133e39b174" gracePeriod=600 Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.404232 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cdd86cc49-cf96p"] Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.447474 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55cc9dfcd4-m6gv8"] Feb 28 09:20:43 crc kubenswrapper[4996]: E0228 09:20:43.447906 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf05cde9-be3f-43b2-bb36-735310be0c3a" containerName="init" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.447933 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf05cde9-be3f-43b2-bb36-735310be0c3a" containerName="init" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.448168 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf05cde9-be3f-43b2-bb36-735310be0c3a" containerName="init" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.449277 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.452089 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.473600 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="4772ad3990edf9d0c6d563de92a45c70bf5a82075c0fa4fd5de03b133e39b174" exitCode=0 Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.473648 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"4772ad3990edf9d0c6d563de92a45c70bf5a82075c0fa4fd5de03b133e39b174"} Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.473703 4996 scope.go:117] "RemoveContainer" containerID="02e43fcaf1e32104b093babde57c895a435eb8e935013328e1ffee5be20b3dec" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.479045 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55cc9dfcd4-m6gv8"] Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.528287 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-combined-ca-bundle\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.528367 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-config-data\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: 
I0228 09:20:43.528417 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dcd95e8-c193-47ef-bc21-acabccfcff53-logs\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.528438 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m49v\" (UniqueName: \"kubernetes.io/projected/9dcd95e8-c193-47ef-bc21-acabccfcff53-kube-api-access-6m49v\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.528497 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-tls-certs\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.528519 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-secret-key\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.528532 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-scripts\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 
09:20:43.556527 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64c897fd85-x7qwt"] Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.577277 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6ccc6bcbc4-2fmz9"] Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.578587 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.584394 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6ccc6bcbc4-2fmz9"] Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629712 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-config-data\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629754 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b605afa6-a344-45f0-b62a-56f46b346c52-horizon-tls-certs\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629799 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dcd95e8-c193-47ef-bc21-acabccfcff53-logs\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629817 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m49v\" (UniqueName: 
\"kubernetes.io/projected/9dcd95e8-c193-47ef-bc21-acabccfcff53-kube-api-access-6m49v\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629843 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-tls-certs\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629863 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-secret-key\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629880 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-scripts\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629900 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpshv\" (UniqueName: \"kubernetes.io/projected/b605afa6-a344-45f0-b62a-56f46b346c52-kube-api-access-tpshv\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629935 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b605afa6-a344-45f0-b62a-56f46b346c52-scripts\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629953 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b605afa6-a344-45f0-b62a-56f46b346c52-config-data\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629983 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-combined-ca-bundle\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.629998 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b605afa6-a344-45f0-b62a-56f46b346c52-combined-ca-bundle\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.630036 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b605afa6-a344-45f0-b62a-56f46b346c52-horizon-secret-key\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.630054 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b605afa6-a344-45f0-b62a-56f46b346c52-logs\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.630886 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-scripts\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.631051 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dcd95e8-c193-47ef-bc21-acabccfcff53-logs\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.632151 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-config-data\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.636381 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-secret-key\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.636494 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-tls-certs\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " 
pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.640607 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-combined-ca-bundle\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.648274 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m49v\" (UniqueName: \"kubernetes.io/projected/9dcd95e8-c193-47ef-bc21-acabccfcff53-kube-api-access-6m49v\") pod \"horizon-55cc9dfcd4-m6gv8\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.731110 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b605afa6-a344-45f0-b62a-56f46b346c52-horizon-secret-key\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.731148 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b605afa6-a344-45f0-b62a-56f46b346c52-logs\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.731195 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b605afa6-a344-45f0-b62a-56f46b346c52-horizon-tls-certs\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.731259 
4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpshv\" (UniqueName: \"kubernetes.io/projected/b605afa6-a344-45f0-b62a-56f46b346c52-kube-api-access-tpshv\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.731296 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b605afa6-a344-45f0-b62a-56f46b346c52-scripts\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.731314 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b605afa6-a344-45f0-b62a-56f46b346c52-config-data\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.731337 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b605afa6-a344-45f0-b62a-56f46b346c52-combined-ca-bundle\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.731551 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b605afa6-a344-45f0-b62a-56f46b346c52-logs\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.732159 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b605afa6-a344-45f0-b62a-56f46b346c52-scripts\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.732785 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b605afa6-a344-45f0-b62a-56f46b346c52-config-data\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.736860 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b605afa6-a344-45f0-b62a-56f46b346c52-horizon-tls-certs\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.738559 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b605afa6-a344-45f0-b62a-56f46b346c52-combined-ca-bundle\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.745398 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b605afa6-a344-45f0-b62a-56f46b346c52-horizon-secret-key\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: \"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.749130 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpshv\" (UniqueName: \"kubernetes.io/projected/b605afa6-a344-45f0-b62a-56f46b346c52-kube-api-access-tpshv\") pod \"horizon-6ccc6bcbc4-2fmz9\" (UID: 
\"b605afa6-a344-45f0-b62a-56f46b346c52\") " pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.780069 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:20:43 crc kubenswrapper[4996]: I0228 09:20:43.905071 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:20:45 crc kubenswrapper[4996]: I0228 09:20:45.893541 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" Feb 28 09:20:45 crc kubenswrapper[4996]: I0228 09:20:45.976180 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4cm87"] Feb 28 09:20:45 crc kubenswrapper[4996]: I0228 09:20:45.976425 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerName="dnsmasq-dns" containerID="cri-o://f2b6b958a5535cc232df69e838ca026c8411fe969cc4f19388d1d7fd7f208fea" gracePeriod=10 Feb 28 09:20:46 crc kubenswrapper[4996]: I0228 09:20:46.512644 4996 generic.go:334] "Generic (PLEG): container finished" podID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerID="f2b6b958a5535cc232df69e838ca026c8411fe969cc4f19388d1d7fd7f208fea" exitCode=0 Feb 28 09:20:46 crc kubenswrapper[4996]: I0228 09:20:46.512685 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" event={"ID":"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1","Type":"ContainerDied","Data":"f2b6b958a5535cc232df69e838ca026c8411fe969cc4f19388d1d7fd7f208fea"} Feb 28 09:20:49 crc kubenswrapper[4996]: I0228 09:20:49.532983 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.128:5353: connect: connection refused" Feb 28 09:20:51 crc kubenswrapper[4996]: E0228 09:20:51.466330 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 28 09:20:51 crc kubenswrapper[4996]: E0228 09:20:51.466493 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbch644h94h546h64ch55ch67dhfch7ch5d6h57ch5f6hbbh668h5d5h68dh5d7h57h57dh55fh5c9h6fh5dchdbh64h659h659h559h58ch694hbfhb8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5wzzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFiles
ystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5fd5ff4cc-tmjhp_openstack(8cccb166-daa1-4c11-ba1c-a36d07cf2772): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 09:20:51 crc kubenswrapper[4996]: E0228 09:20:51.485063 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5fd5ff4cc-tmjhp" podUID="8cccb166-daa1-4c11-ba1c-a36d07cf2772" Feb 28 09:20:53 crc kubenswrapper[4996]: E0228 09:20:53.851239 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 28 09:20:53 crc kubenswrapper[4996]: E0228 09:20:53.852291 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc6h9ch55ch5d5h9ch5c5h56bh5f6hbbhfdh57bhcbh598h7fhb9hc4h696h5d9h56h655h55dh59bh5b5h689h5d6h666h5d7h68bh548h67fh596h685q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nld7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-cdd86cc49-cf96p_openstack(5fcb1e96-955c-4fa4-942b-13a451a2d750): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 09:20:53 crc kubenswrapper[4996]: E0228 
09:20:53.854747 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-cdd86cc49-cf96p" podUID="5fcb1e96-955c-4fa4-942b-13a451a2d750" Feb 28 09:20:53 crc kubenswrapper[4996]: E0228 09:20:53.871915 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 28 09:20:53 crc kubenswrapper[4996]: E0228 09:20:53.872118 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbch78h649h57ch5d6h56ch656h5d4h66fh7bh99h59fh587h558h659h56dh5b9h545hffh675hc4h646h666h6dh65chd4h8bh599h56bhb9h58fh59dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pbsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-64c897fd85-x7qwt_openstack(4d51688d-0c98-4246-925e-ad1d4f9ef3d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 09:20:53 crc kubenswrapper[4996]: E0228 
09:20:53.876212 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-64c897fd85-x7qwt" podUID="4d51688d-0c98-4246-925e-ad1d4f9ef3d5" Feb 28 09:20:54 crc kubenswrapper[4996]: I0228 09:20:54.533898 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Feb 28 09:20:59 crc kubenswrapper[4996]: I0228 09:20:59.533645 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused" Feb 28 09:20:59 crc kubenswrapper[4996]: I0228 09:20:59.534472 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.080969 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.083536 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160100 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-credential-keys\") pod \"142b4341-66d8-4383-b848-f2159dcefffe\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160160 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-scripts\") pod \"142b4341-66d8-4383-b848-f2159dcefffe\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160247 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-scripts\") pod \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160292 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cccb166-daa1-4c11-ba1c-a36d07cf2772-horizon-secret-key\") pod \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160325 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmf2p\" (UniqueName: \"kubernetes.io/projected/142b4341-66d8-4383-b848-f2159dcefffe-kube-api-access-pmf2p\") pod \"142b4341-66d8-4383-b848-f2159dcefffe\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160353 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-config-data\") pod \"142b4341-66d8-4383-b848-f2159dcefffe\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160381 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cccb166-daa1-4c11-ba1c-a36d07cf2772-logs\") pod \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160414 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-fernet-keys\") pod \"142b4341-66d8-4383-b848-f2159dcefffe\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160471 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wzzs\" (UniqueName: \"kubernetes.io/projected/8cccb166-daa1-4c11-ba1c-a36d07cf2772-kube-api-access-5wzzs\") pod \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160491 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-combined-ca-bundle\") pod \"142b4341-66d8-4383-b848-f2159dcefffe\" (UID: \"142b4341-66d8-4383-b848-f2159dcefffe\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.160548 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-config-data\") pod \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\" (UID: \"8cccb166-daa1-4c11-ba1c-a36d07cf2772\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.162117 4996 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-config-data" (OuterVolumeSpecName: "config-data") pod "8cccb166-daa1-4c11-ba1c-a36d07cf2772" (UID: "8cccb166-daa1-4c11-ba1c-a36d07cf2772"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.163190 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cccb166-daa1-4c11-ba1c-a36d07cf2772-logs" (OuterVolumeSpecName: "logs") pod "8cccb166-daa1-4c11-ba1c-a36d07cf2772" (UID: "8cccb166-daa1-4c11-ba1c-a36d07cf2772"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.164494 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-scripts" (OuterVolumeSpecName: "scripts") pod "8cccb166-daa1-4c11-ba1c-a36d07cf2772" (UID: "8cccb166-daa1-4c11-ba1c-a36d07cf2772"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.166064 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "142b4341-66d8-4383-b848-f2159dcefffe" (UID: "142b4341-66d8-4383-b848-f2159dcefffe"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.166731 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cccb166-daa1-4c11-ba1c-a36d07cf2772-kube-api-access-5wzzs" (OuterVolumeSpecName: "kube-api-access-5wzzs") pod "8cccb166-daa1-4c11-ba1c-a36d07cf2772" (UID: "8cccb166-daa1-4c11-ba1c-a36d07cf2772"). 
InnerVolumeSpecName "kube-api-access-5wzzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.167550 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-scripts" (OuterVolumeSpecName: "scripts") pod "142b4341-66d8-4383-b848-f2159dcefffe" (UID: "142b4341-66d8-4383-b848-f2159dcefffe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.168843 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "142b4341-66d8-4383-b848-f2159dcefffe" (UID: "142b4341-66d8-4383-b848-f2159dcefffe"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.169295 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cccb166-daa1-4c11-ba1c-a36d07cf2772-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8cccb166-daa1-4c11-ba1c-a36d07cf2772" (UID: "8cccb166-daa1-4c11-ba1c-a36d07cf2772"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.182983 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/142b4341-66d8-4383-b848-f2159dcefffe-kube-api-access-pmf2p" (OuterVolumeSpecName: "kube-api-access-pmf2p") pod "142b4341-66d8-4383-b848-f2159dcefffe" (UID: "142b4341-66d8-4383-b848-f2159dcefffe"). InnerVolumeSpecName "kube-api-access-pmf2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.205084 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "142b4341-66d8-4383-b848-f2159dcefffe" (UID: "142b4341-66d8-4383-b848-f2159dcefffe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.228374 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-config-data" (OuterVolumeSpecName: "config-data") pod "142b4341-66d8-4383-b848-f2159dcefffe" (UID: "142b4341-66d8-4383-b848-f2159dcefffe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261669 4996 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261710 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261724 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261736 4996 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8cccb166-daa1-4c11-ba1c-a36d07cf2772-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc 
kubenswrapper[4996]: I0228 09:21:01.261748 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmf2p\" (UniqueName: \"kubernetes.io/projected/142b4341-66d8-4383-b848-f2159dcefffe-kube-api-access-pmf2p\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261761 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261773 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cccb166-daa1-4c11-ba1c-a36d07cf2772-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261785 4996 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261796 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wzzs\" (UniqueName: \"kubernetes.io/projected/8cccb166-daa1-4c11-ba1c-a36d07cf2772-kube-api-access-5wzzs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261807 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/142b4341-66d8-4383-b848-f2159dcefffe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.261818 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8cccb166-daa1-4c11-ba1c-a36d07cf2772-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: E0228 09:21:01.552427 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc 
= copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 28 09:21:01 crc kubenswrapper[4996]: E0228 09:21:01.552695 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh5n8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2fdzl_openstack(531cd3d1-8618-42d1-88a1-b23b8ca9be62): ErrImagePull: rpc error: code 
= Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 09:21:01 crc kubenswrapper[4996]: E0228 09:21:01.554080 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2fdzl" podUID="531cd3d1-8618-42d1-88a1-b23b8ca9be62" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.555176 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.563415 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.657239 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cdd86cc49-cf96p" event={"ID":"5fcb1e96-955c-4fa4-942b-13a451a2d750","Type":"ContainerDied","Data":"e717440553f4cca809d70665cebbddb2eb97c9e6cac2f9c7ab752a481703aa93"} Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.657369 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cdd86cc49-cf96p" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.659481 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-k6pkt" event={"ID":"142b4341-66d8-4383-b848-f2159dcefffe","Type":"ContainerDied","Data":"5a0db0a0db37f78b637a2ebec73a9079a27148cc49a1345631fbf8b426774a04"} Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.659519 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0db0a0db37f78b637a2ebec73a9079a27148cc49a1345631fbf8b426774a04" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.659533 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-k6pkt" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.662289 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64c897fd85-x7qwt" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.662837 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64c897fd85-x7qwt" event={"ID":"4d51688d-0c98-4246-925e-ad1d4f9ef3d5","Type":"ContainerDied","Data":"4ffc81dfb987da7c5c93157ed5a8eedef47131eec2b71ea7d26a35a38e8d8475"} Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.664185 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5fd5ff4cc-tmjhp" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.665439 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fd5ff4cc-tmjhp" event={"ID":"8cccb166-daa1-4c11-ba1c-a36d07cf2772","Type":"ContainerDied","Data":"eb793555162ce2b7ed4e877a633b6599748dcd1ef767a96d9c61a5e46062509a"} Feb 28 09:21:01 crc kubenswrapper[4996]: E0228 09:21:01.666417 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-2fdzl" podUID="531cd3d1-8618-42d1-88a1-b23b8ca9be62" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.667759 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-scripts\") pod \"5fcb1e96-955c-4fa4-942b-13a451a2d750\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.667796 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/5fcb1e96-955c-4fa4-942b-13a451a2d750-horizon-secret-key\") pod \"5fcb1e96-955c-4fa4-942b-13a451a2d750\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.667825 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fcb1e96-955c-4fa4-942b-13a451a2d750-logs\") pod \"5fcb1e96-955c-4fa4-942b-13a451a2d750\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.667854 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-logs\") pod \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.667889 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-config-data\") pod \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.667960 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-horizon-secret-key\") pod \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.667994 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-config-data\") pod \"5fcb1e96-955c-4fa4-942b-13a451a2d750\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.668040 4996 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-4pbsw\" (UniqueName: \"kubernetes.io/projected/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-kube-api-access-4pbsw\") pod \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.668073 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-scripts\") pod \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\" (UID: \"4d51688d-0c98-4246-925e-ad1d4f9ef3d5\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.668113 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nld7\" (UniqueName: \"kubernetes.io/projected/5fcb1e96-955c-4fa4-942b-13a451a2d750-kube-api-access-9nld7\") pod \"5fcb1e96-955c-4fa4-942b-13a451a2d750\" (UID: \"5fcb1e96-955c-4fa4-942b-13a451a2d750\") " Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.669350 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fcb1e96-955c-4fa4-942b-13a451a2d750-logs" (OuterVolumeSpecName: "logs") pod "5fcb1e96-955c-4fa4-942b-13a451a2d750" (UID: "5fcb1e96-955c-4fa4-942b-13a451a2d750"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.669567 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-logs" (OuterVolumeSpecName: "logs") pod "4d51688d-0c98-4246-925e-ad1d4f9ef3d5" (UID: "4d51688d-0c98-4246-925e-ad1d4f9ef3d5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.670070 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-config-data" (OuterVolumeSpecName: "config-data") pod "4d51688d-0c98-4246-925e-ad1d4f9ef3d5" (UID: "4d51688d-0c98-4246-925e-ad1d4f9ef3d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.671204 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-scripts" (OuterVolumeSpecName: "scripts") pod "4d51688d-0c98-4246-925e-ad1d4f9ef3d5" (UID: "4d51688d-0c98-4246-925e-ad1d4f9ef3d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.671287 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-scripts" (OuterVolumeSpecName: "scripts") pod "5fcb1e96-955c-4fa4-942b-13a451a2d750" (UID: "5fcb1e96-955c-4fa4-942b-13a451a2d750"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.672103 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-config-data" (OuterVolumeSpecName: "config-data") pod "5fcb1e96-955c-4fa4-942b-13a451a2d750" (UID: "5fcb1e96-955c-4fa4-942b-13a451a2d750"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.673245 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4d51688d-0c98-4246-925e-ad1d4f9ef3d5" (UID: "4d51688d-0c98-4246-925e-ad1d4f9ef3d5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.674782 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcb1e96-955c-4fa4-942b-13a451a2d750-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5fcb1e96-955c-4fa4-942b-13a451a2d750" (UID: "5fcb1e96-955c-4fa4-942b-13a451a2d750"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.675242 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-kube-api-access-4pbsw" (OuterVolumeSpecName: "kube-api-access-4pbsw") pod "4d51688d-0c98-4246-925e-ad1d4f9ef3d5" (UID: "4d51688d-0c98-4246-925e-ad1d4f9ef3d5"). InnerVolumeSpecName "kube-api-access-4pbsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.675878 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fcb1e96-955c-4fa4-942b-13a451a2d750-kube-api-access-9nld7" (OuterVolumeSpecName: "kube-api-access-9nld7") pod "5fcb1e96-955c-4fa4-942b-13a451a2d750" (UID: "5fcb1e96-955c-4fa4-942b-13a451a2d750"). InnerVolumeSpecName "kube-api-access-9nld7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.738105 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5fd5ff4cc-tmjhp"] Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.747891 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5fd5ff4cc-tmjhp"] Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.770047 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nld7\" (UniqueName: \"kubernetes.io/projected/5fcb1e96-955c-4fa4-942b-13a451a2d750-kube-api-access-9nld7\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.770079 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.770111 4996 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5fcb1e96-955c-4fa4-942b-13a451a2d750-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.770124 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fcb1e96-955c-4fa4-942b-13a451a2d750-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.770135 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.770148 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 
09:21:01.770160 4996 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.770194 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5fcb1e96-955c-4fa4-942b-13a451a2d750-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.770208 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pbsw\" (UniqueName: \"kubernetes.io/projected/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-kube-api-access-4pbsw\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:01 crc kubenswrapper[4996]: I0228 09:21:01.770217 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d51688d-0c98-4246-925e-ad1d4f9ef3d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.013230 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cdd86cc49-cf96p"] Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.019817 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cdd86cc49-cf96p"] Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.085284 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64c897fd85-x7qwt"] Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.093246 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-64c897fd85-x7qwt"] Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.260189 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-k6pkt"] Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.267324 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-k6pkt"] Feb 28 09:21:02 
crc kubenswrapper[4996]: I0228 09:21:02.363577 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-svn7g"] Feb 28 09:21:02 crc kubenswrapper[4996]: E0228 09:21:02.363936 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="142b4341-66d8-4383-b848-f2159dcefffe" containerName="keystone-bootstrap" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.363956 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="142b4341-66d8-4383-b848-f2159dcefffe" containerName="keystone-bootstrap" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.364159 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="142b4341-66d8-4383-b848-f2159dcefffe" containerName="keystone-bootstrap" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.364674 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.366089 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.368936 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mxpf8" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.369371 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.372302 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.373711 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.400086 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-svn7g"] Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.489357 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-credential-keys\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.489427 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-fernet-keys\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.489502 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-scripts\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.489574 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-config-data\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.489666 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-combined-ca-bundle\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.489696 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5hnz\" (UniqueName: \"kubernetes.io/projected/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-kube-api-access-t5hnz\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.590696 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-scripts\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.590744 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-config-data\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.590789 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-combined-ca-bundle\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.590806 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5hnz\" (UniqueName: \"kubernetes.io/projected/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-kube-api-access-t5hnz\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.590838 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-credential-keys\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.590878 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-fernet-keys\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.596384 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-config-data\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.601134 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-credential-keys\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.601799 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-combined-ca-bundle\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.603984 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-scripts\") pod \"keystone-bootstrap-svn7g\" (UID: 
\"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.604433 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-fernet-keys\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.606842 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5hnz\" (UniqueName: \"kubernetes.io/projected/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-kube-api-access-t5hnz\") pod \"keystone-bootstrap-svn7g\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.685461 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:02 crc kubenswrapper[4996]: E0228 09:21:02.743072 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 28 09:21:02 crc kubenswrapper[4996]: E0228 09:21:02.743732 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tgfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-kplwp_openstack(db0401da-7bc1-4203-bdfb-2a06deade35b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 09:21:02 crc kubenswrapper[4996]: E0228 09:21:02.744942 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-kplwp" podUID="db0401da-7bc1-4203-bdfb-2a06deade35b" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.825627 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.998065 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-config\") pod \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.998524 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-sb\") pod \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.998579 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-dns-svc\") pod \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " Feb 28 09:21:02 crc 
kubenswrapper[4996]: I0228 09:21:02.998620 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-nb\") pod \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " Feb 28 09:21:02 crc kubenswrapper[4996]: I0228 09:21:02.998666 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7q8d\" (UniqueName: \"kubernetes.io/projected/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-kube-api-access-v7q8d\") pod \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\" (UID: \"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1\") " Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.012757 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-kube-api-access-v7q8d" (OuterVolumeSpecName: "kube-api-access-v7q8d") pod "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" (UID: "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1"). InnerVolumeSpecName "kube-api-access-v7q8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.048583 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" (UID: "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.052625 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="142b4341-66d8-4383-b848-f2159dcefffe" path="/var/lib/kubelet/pods/142b4341-66d8-4383-b848-f2159dcefffe/volumes" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.053249 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d51688d-0c98-4246-925e-ad1d4f9ef3d5" path="/var/lib/kubelet/pods/4d51688d-0c98-4246-925e-ad1d4f9ef3d5/volumes" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.053676 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fcb1e96-955c-4fa4-942b-13a451a2d750" path="/var/lib/kubelet/pods/5fcb1e96-955c-4fa4-942b-13a451a2d750/volumes" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.054145 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cccb166-daa1-4c11-ba1c-a36d07cf2772" path="/var/lib/kubelet/pods/8cccb166-daa1-4c11-ba1c-a36d07cf2772/volumes" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.069186 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" (UID: "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.081354 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-config" (OuterVolumeSpecName: "config") pod "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" (UID: "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.085110 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" (UID: "0c7ff272-7fd3-446c-bb7f-dc647e45ccc1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.100785 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.100821 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.100832 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.100871 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7q8d\" (UniqueName: \"kubernetes.io/projected/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-kube-api-access-v7q8d\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.100881 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.167532 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55cc9dfcd4-m6gv8"] Feb 28 09:21:03 
crc kubenswrapper[4996]: I0228 09:21:03.179063 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6ccc6bcbc4-2fmz9"] Feb 28 09:21:03 crc kubenswrapper[4996]: W0228 09:21:03.180378 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb605afa6_a344_45f0_b62a_56f46b346c52.slice/crio-a150eddfcbb9fcc741ceac38d8e25af65326bb410f217e81e66bdeb4136e0e1c WatchSource:0}: Error finding container a150eddfcbb9fcc741ceac38d8e25af65326bb410f217e81e66bdeb4136e0e1c: Status 404 returned error can't find the container with id a150eddfcbb9fcc741ceac38d8e25af65326bb410f217e81e66bdeb4136e0e1c Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.289162 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-svn7g"] Feb 28 09:21:03 crc kubenswrapper[4996]: W0228 09:21:03.304292 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaec2d69b_af74_41fe_b5eb_2cd05e40ffde.slice/crio-30a0f09314215632a887d478fae702f2b6b4e5ba073f058bba8092e33c4a8114 WatchSource:0}: Error finding container 30a0f09314215632a887d478fae702f2b6b4e5ba073f058bba8092e33c4a8114: Status 404 returned error can't find the container with id 30a0f09314215632a887d478fae702f2b6b4e5ba073f058bba8092e33c4a8114 Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.695829 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cc9dfcd4-m6gv8" event={"ID":"9dcd95e8-c193-47ef-bc21-acabccfcff53","Type":"ContainerStarted","Data":"7640b0cc663aeb3d7450d0d4559a4a81c8e92821577e1a8d946e06be1fd417f9"} Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.698002 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-svn7g" 
event={"ID":"aec2d69b-af74-41fe-b5eb-2cd05e40ffde","Type":"ContainerStarted","Data":"6d747f3601b10c7447e568d9f42aa934e141da09f4b0ede9f8ba8455a64884f4"} Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.698066 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-svn7g" event={"ID":"aec2d69b-af74-41fe-b5eb-2cd05e40ffde","Type":"ContainerStarted","Data":"30a0f09314215632a887d478fae702f2b6b4e5ba073f058bba8092e33c4a8114"} Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.702592 4996 generic.go:334] "Generic (PLEG): container finished" podID="3113a2ed-c04d-4f18-9f1d-a47482aa76fa" containerID="2baeee7b0efc41ff490010fdf5d143bc67ba41a430fb206f55c62d62da225a4e" exitCode=0 Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.702650 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nmp42" event={"ID":"3113a2ed-c04d-4f18-9f1d-a47482aa76fa","Type":"ContainerDied","Data":"2baeee7b0efc41ff490010fdf5d143bc67ba41a430fb206f55c62d62da225a4e"} Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.704541 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" event={"ID":"0c7ff272-7fd3-446c-bb7f-dc647e45ccc1","Type":"ContainerDied","Data":"c5ee502e15374de39bb4e42c4b320f12ce48b2b5fa740350cc99cd9c7426e423"} Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.704584 4996 scope.go:117] "RemoveContainer" containerID="f2b6b958a5535cc232df69e838ca026c8411fe969cc4f19388d1d7fd7f208fea" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.704602 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-4cm87" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.707769 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2wgr" event={"ID":"05164ff9-4bc2-433a-881c-5046c3352637","Type":"ContainerStarted","Data":"e8b814560f894ad945f8a946d5bfa8b6865ca2c9499f785523f6072478737d52"} Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.709762 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerStarted","Data":"d9e88988b09b9e0d0341a1da6515d87ea59a6c985165eee72521aff574981628"} Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.713481 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"0d74e0665ce63a7b1e3ccb10e05382d63c764d169c6c0125d3275a4454729a94"} Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.720850 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ccc6bcbc4-2fmz9" event={"ID":"b605afa6-a344-45f0-b62a-56f46b346c52","Type":"ContainerStarted","Data":"a150eddfcbb9fcc741ceac38d8e25af65326bb410f217e81e66bdeb4136e0e1c"} Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.720906 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-svn7g" podStartSLOduration=1.720813626 podStartE2EDuration="1.720813626s" podCreationTimestamp="2026-02-28 09:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:03.717597189 +0000 UTC m=+1227.408400000" watchObservedRunningTime="2026-02-28 09:21:03.720813626 +0000 UTC m=+1227.411616437" Feb 28 09:21:03 crc kubenswrapper[4996]: E0228 09:21:03.725416 4996 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-kplwp" podUID="db0401da-7bc1-4203-bdfb-2a06deade35b" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.765551 4996 scope.go:117] "RemoveContainer" containerID="7666b7fb4d485808a56ff4b76748465078c410d162e2121e6a0a3ecb40142409" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.782048 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-j2wgr" podStartSLOduration=2.906661029 podStartE2EDuration="28.782030582s" podCreationTimestamp="2026-02-28 09:20:35 +0000 UTC" firstStartedPulling="2026-02-28 09:20:36.804451103 +0000 UTC m=+1200.495253914" lastFinishedPulling="2026-02-28 09:21:02.679820656 +0000 UTC m=+1226.370623467" observedRunningTime="2026-02-28 09:21:03.775204907 +0000 UTC m=+1227.466007728" watchObservedRunningTime="2026-02-28 09:21:03.782030582 +0000 UTC m=+1227.472833403" Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.804116 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4cm87"] Feb 28 09:21:03 crc kubenswrapper[4996]: I0228 09:21:03.814984 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-4cm87"] Feb 28 09:21:04 crc kubenswrapper[4996]: I0228 09:21:04.746189 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerStarted","Data":"aa346bb74d259903bd3ce9eb8ca54e81f88e7a9d9a3838c86d0809224d6f31f4"} Feb 28 09:21:04 crc kubenswrapper[4996]: I0228 09:21:04.749432 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ccc6bcbc4-2fmz9" 
event={"ID":"b605afa6-a344-45f0-b62a-56f46b346c52","Type":"ContainerStarted","Data":"734240abb659aa3434628c759cc1a37b81db8ac2f3b1e1b98730f90f5d11c636"} Feb 28 09:21:04 crc kubenswrapper[4996]: I0228 09:21:04.749470 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6ccc6bcbc4-2fmz9" event={"ID":"b605afa6-a344-45f0-b62a-56f46b346c52","Type":"ContainerStarted","Data":"06ac4f200ecb755ba0ae114f182f8e1a7941ef326bd3216882327fe8543d13e3"} Feb 28 09:21:04 crc kubenswrapper[4996]: I0228 09:21:04.754350 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cc9dfcd4-m6gv8" event={"ID":"9dcd95e8-c193-47ef-bc21-acabccfcff53","Type":"ContainerStarted","Data":"439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284"} Feb 28 09:21:04 crc kubenswrapper[4996]: I0228 09:21:04.754398 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cc9dfcd4-m6gv8" event={"ID":"9dcd95e8-c193-47ef-bc21-acabccfcff53","Type":"ContainerStarted","Data":"d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a"} Feb 28 09:21:04 crc kubenswrapper[4996]: I0228 09:21:04.785413 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6ccc6bcbc4-2fmz9" podStartSLOduration=20.823239587 podStartE2EDuration="21.785385555s" podCreationTimestamp="2026-02-28 09:20:43 +0000 UTC" firstStartedPulling="2026-02-28 09:21:03.185439658 +0000 UTC m=+1226.876242479" lastFinishedPulling="2026-02-28 09:21:04.147585636 +0000 UTC m=+1227.838388447" observedRunningTime="2026-02-28 09:21:04.77689158 +0000 UTC m=+1228.467694411" watchObservedRunningTime="2026-02-28 09:21:04.785385555 +0000 UTC m=+1228.476188386" Feb 28 09:21:04 crc kubenswrapper[4996]: I0228 09:21:04.817769 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-55cc9dfcd4-m6gv8" podStartSLOduration=21.339809981 podStartE2EDuration="21.817742675s" podCreationTimestamp="2026-02-28 09:20:43 
+0000 UTC" firstStartedPulling="2026-02-28 09:21:03.171686696 +0000 UTC m=+1226.862489507" lastFinishedPulling="2026-02-28 09:21:03.6496194 +0000 UTC m=+1227.340422201" observedRunningTime="2026-02-28 09:21:04.797532107 +0000 UTC m=+1228.488334938" watchObservedRunningTime="2026-02-28 09:21:04.817742675 +0000 UTC m=+1228.508545496" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.053451 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" path="/var/lib/kubelet/pods/0c7ff272-7fd3-446c-bb7f-dc647e45ccc1/volumes" Feb 28 09:21:05 crc kubenswrapper[4996]: E0228 09:21:05.113515 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05164ff9_4bc2_433a_881c_5046c3352637.slice/crio-e8b814560f894ad945f8a946d5bfa8b6865ca2c9499f785523f6072478737d52.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05164ff9_4bc2_433a_881c_5046c3352637.slice/crio-conmon-e8b814560f894ad945f8a946d5bfa8b6865ca2c9499f785523f6072478737d52.scope\": RecentStats: unable to find data in memory cache]" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.178681 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nmp42" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.346452 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr8vl\" (UniqueName: \"kubernetes.io/projected/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-kube-api-access-rr8vl\") pod \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.346931 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-combined-ca-bundle\") pod \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.347032 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-config\") pod \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\" (UID: \"3113a2ed-c04d-4f18-9f1d-a47482aa76fa\") " Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.365239 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-kube-api-access-rr8vl" (OuterVolumeSpecName: "kube-api-access-rr8vl") pod "3113a2ed-c04d-4f18-9f1d-a47482aa76fa" (UID: "3113a2ed-c04d-4f18-9f1d-a47482aa76fa"). InnerVolumeSpecName "kube-api-access-rr8vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.374636 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3113a2ed-c04d-4f18-9f1d-a47482aa76fa" (UID: "3113a2ed-c04d-4f18-9f1d-a47482aa76fa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.382345 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-config" (OuterVolumeSpecName: "config") pod "3113a2ed-c04d-4f18-9f1d-a47482aa76fa" (UID: "3113a2ed-c04d-4f18-9f1d-a47482aa76fa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.449434 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.449466 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.449476 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr8vl\" (UniqueName: \"kubernetes.io/projected/3113a2ed-c04d-4f18-9f1d-a47482aa76fa-kube-api-access-rr8vl\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.796359 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-nmp42" event={"ID":"3113a2ed-c04d-4f18-9f1d-a47482aa76fa","Type":"ContainerDied","Data":"9791d4819e47f9bdd941af74f85ad5ee44c5450d88a6c53f2ff13344bd4bf913"} Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.796403 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9791d4819e47f9bdd941af74f85ad5ee44c5450d88a6c53f2ff13344bd4bf913" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.796411 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-nmp42" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.803936 4996 generic.go:334] "Generic (PLEG): container finished" podID="05164ff9-4bc2-433a-881c-5046c3352637" containerID="e8b814560f894ad945f8a946d5bfa8b6865ca2c9499f785523f6072478737d52" exitCode=0 Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.804020 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2wgr" event={"ID":"05164ff9-4bc2-433a-881c-5046c3352637","Type":"ContainerDied","Data":"e8b814560f894ad945f8a946d5bfa8b6865ca2c9499f785523f6072478737d52"} Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.986620 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-xfbnm"] Feb 28 09:21:05 crc kubenswrapper[4996]: E0228 09:21:05.987022 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerName="dnsmasq-dns" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.987041 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerName="dnsmasq-dns" Feb 28 09:21:05 crc kubenswrapper[4996]: E0228 09:21:05.987059 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerName="init" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.987065 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerName="init" Feb 28 09:21:05 crc kubenswrapper[4996]: E0228 09:21:05.987079 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3113a2ed-c04d-4f18-9f1d-a47482aa76fa" containerName="neutron-db-sync" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.987085 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3113a2ed-c04d-4f18-9f1d-a47482aa76fa" containerName="neutron-db-sync" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 
09:21:05.987250 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7ff272-7fd3-446c-bb7f-dc647e45ccc1" containerName="dnsmasq-dns" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.987260 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3113a2ed-c04d-4f18-9f1d-a47482aa76fa" containerName="neutron-db-sync" Feb 28 09:21:05 crc kubenswrapper[4996]: I0228 09:21:05.988110 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.012581 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-xfbnm"] Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.062646 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-dns-svc\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.062690 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6x2q\" (UniqueName: \"kubernetes.io/projected/1aa63814-587f-4839-bf7f-4bc1e02f8704-kube-api-access-c6x2q\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.062709 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.062738 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-config\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.062763 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.069561 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58bbf8b97d-2bk65"] Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.077358 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.082807 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gqbdj" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.082888 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.083171 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.083630 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.084133 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58bbf8b97d-2bk65"] Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.164501 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-dns-svc\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.164550 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6x2q\" (UniqueName: \"kubernetes.io/projected/1aa63814-587f-4839-bf7f-4bc1e02f8704-kube-api-access-c6x2q\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.164571 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: 
\"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.164604 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-config\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.164630 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.165723 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-nb\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.166829 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-dns-svc\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.167623 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-sb\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc 
kubenswrapper[4996]: I0228 09:21:06.168181 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-config\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.262253 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6x2q\" (UniqueName: \"kubernetes.io/projected/1aa63814-587f-4839-bf7f-4bc1e02f8704-kube-api-access-c6x2q\") pod \"dnsmasq-dns-7b946d459c-xfbnm\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.267134 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-config\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.267230 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-httpd-config\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.267258 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-ovndb-tls-certs\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.267329 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shpvl\" (UniqueName: \"kubernetes.io/projected/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-kube-api-access-shpvl\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.267400 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-combined-ca-bundle\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.332347 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.368602 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-httpd-config\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.368658 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-ovndb-tls-certs\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.368704 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shpvl\" (UniqueName: \"kubernetes.io/projected/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-kube-api-access-shpvl\") pod \"neutron-58bbf8b97d-2bk65\" (UID: 
\"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.368769 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-combined-ca-bundle\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.368822 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-config\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.374438 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-combined-ca-bundle\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.374788 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-ovndb-tls-certs\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.376699 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-httpd-config\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 
09:21:06.386136 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-config\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.388820 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shpvl\" (UniqueName: \"kubernetes.io/projected/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-kube-api-access-shpvl\") pod \"neutron-58bbf8b97d-2bk65\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:06 crc kubenswrapper[4996]: I0228 09:21:06.438546 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.341760 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j2wgr" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.498151 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-config-data\") pod \"05164ff9-4bc2-433a-881c-5046c3352637\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.498826 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6s7p\" (UniqueName: \"kubernetes.io/projected/05164ff9-4bc2-433a-881c-5046c3352637-kube-api-access-k6s7p\") pod \"05164ff9-4bc2-433a-881c-5046c3352637\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.498868 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/05164ff9-4bc2-433a-881c-5046c3352637-logs\") pod \"05164ff9-4bc2-433a-881c-5046c3352637\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.498984 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-scripts\") pod \"05164ff9-4bc2-433a-881c-5046c3352637\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.499052 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-combined-ca-bundle\") pod \"05164ff9-4bc2-433a-881c-5046c3352637\" (UID: \"05164ff9-4bc2-433a-881c-5046c3352637\") " Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.500045 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05164ff9-4bc2-433a-881c-5046c3352637-logs" (OuterVolumeSpecName: "logs") pod "05164ff9-4bc2-433a-881c-5046c3352637" (UID: "05164ff9-4bc2-433a-881c-5046c3352637"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.519459 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-scripts" (OuterVolumeSpecName: "scripts") pod "05164ff9-4bc2-433a-881c-5046c3352637" (UID: "05164ff9-4bc2-433a-881c-5046c3352637"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.523511 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05164ff9-4bc2-433a-881c-5046c3352637-kube-api-access-k6s7p" (OuterVolumeSpecName: "kube-api-access-k6s7p") pod "05164ff9-4bc2-433a-881c-5046c3352637" (UID: "05164ff9-4bc2-433a-881c-5046c3352637"). InnerVolumeSpecName "kube-api-access-k6s7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.535357 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05164ff9-4bc2-433a-881c-5046c3352637" (UID: "05164ff9-4bc2-433a-881c-5046c3352637"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.540164 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-config-data" (OuterVolumeSpecName: "config-data") pod "05164ff9-4bc2-433a-881c-5046c3352637" (UID: "05164ff9-4bc2-433a-881c-5046c3352637"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.600720 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6s7p\" (UniqueName: \"kubernetes.io/projected/05164ff9-4bc2-433a-881c-5046c3352637-kube-api-access-k6s7p\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.600762 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05164ff9-4bc2-433a-881c-5046c3352637-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.600772 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.600782 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.600790 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05164ff9-4bc2-433a-881c-5046c3352637-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:07 crc kubenswrapper[4996]: W0228 09:21:07.766278 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aa63814_587f_4839_bf7f_4bc1e02f8704.slice/crio-c0938dd664e9b2a14382df9efa4a75c72b5eb7164c902d6af1906dd3b9fc6533 WatchSource:0}: Error finding container c0938dd664e9b2a14382df9efa4a75c72b5eb7164c902d6af1906dd3b9fc6533: Status 404 returned error can't find the container with id c0938dd664e9b2a14382df9efa4a75c72b5eb7164c902d6af1906dd3b9fc6533 Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.768628 4996 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-xfbnm"] Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.824936 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-j2wgr" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.824930 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-j2wgr" event={"ID":"05164ff9-4bc2-433a-881c-5046c3352637","Type":"ContainerDied","Data":"b549ea293af69e7dbeaa1beac1e2fd2b11d040e63d4b93f0f4737cb457d8524d"} Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.825048 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b549ea293af69e7dbeaa1beac1e2fd2b11d040e63d4b93f0f4737cb457d8524d" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.826391 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" event={"ID":"1aa63814-587f-4839-bf7f-4bc1e02f8704","Type":"ContainerStarted","Data":"c0938dd664e9b2a14382df9efa4a75c72b5eb7164c902d6af1906dd3b9fc6533"} Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.942082 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c54f964b4-2pr6w"] Feb 28 09:21:07 crc kubenswrapper[4996]: E0228 09:21:07.942750 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05164ff9-4bc2-433a-881c-5046c3352637" containerName="placement-db-sync" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.942766 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="05164ff9-4bc2-433a-881c-5046c3352637" containerName="placement-db-sync" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.942965 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="05164ff9-4bc2-433a-881c-5046c3352637" containerName="placement-db-sync" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.943862 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.947274 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-cbv48" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.947556 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.947787 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.948159 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.948298 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 28 09:21:07 crc kubenswrapper[4996]: I0228 09:21:07.966583 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c54f964b4-2pr6w"] Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.008765 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-public-tls-certs\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.008858 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-logs\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.008987 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-combined-ca-bundle\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.009074 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-scripts\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.009140 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2b4\" (UniqueName: \"kubernetes.io/projected/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-kube-api-access-qw2b4\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.009180 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-internal-tls-certs\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.009593 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-config-data\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.111406 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-public-tls-certs\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.111459 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-logs\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.111501 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-combined-ca-bundle\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.111541 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-scripts\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.111585 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2b4\" (UniqueName: \"kubernetes.io/projected/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-kube-api-access-qw2b4\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.111613 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-internal-tls-certs\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.111699 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-config-data\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.111926 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-logs\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.117181 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-combined-ca-bundle\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.117388 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-scripts\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.118564 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-public-tls-certs\") pod \"placement-c54f964b4-2pr6w\" (UID: 
\"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.119174 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-internal-tls-certs\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.121596 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-config-data\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.128153 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2b4\" (UniqueName: \"kubernetes.io/projected/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-kube-api-access-qw2b4\") pod \"placement-c54f964b4-2pr6w\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.264808 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.274391 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67b5c7c7f7-mzzc4"] Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.275723 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.285364 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.285665 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.297982 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67b5c7c7f7-mzzc4"] Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.313358 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6cbp\" (UniqueName: \"kubernetes.io/projected/49585240-7b27-458c-8d70-d23d8326bb94-kube-api-access-n6cbp\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.313416 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-config\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.313439 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-httpd-config\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.313462 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-public-tls-certs\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.313502 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-ovndb-tls-certs\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.313634 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-combined-ca-bundle\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.313685 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-internal-tls-certs\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.414309 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-combined-ca-bundle\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.414360 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-internal-tls-certs\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.414408 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6cbp\" (UniqueName: \"kubernetes.io/projected/49585240-7b27-458c-8d70-d23d8326bb94-kube-api-access-n6cbp\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.414449 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-config\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.414469 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-httpd-config\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.414489 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-public-tls-certs\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.414523 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-ovndb-tls-certs\") pod 
\"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.420798 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-combined-ca-bundle\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.421654 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-public-tls-certs\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.424665 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-ovndb-tls-certs\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.424915 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-internal-tls-certs\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.427379 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-httpd-config\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 
09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.428108 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-config\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.438682 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6cbp\" (UniqueName: \"kubernetes.io/projected/49585240-7b27-458c-8d70-d23d8326bb94-kube-api-access-n6cbp\") pod \"neutron-67b5c7c7f7-mzzc4\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.611567 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.840721 4996 generic.go:334] "Generic (PLEG): container finished" podID="aec2d69b-af74-41fe-b5eb-2cd05e40ffde" containerID="6d747f3601b10c7447e568d9f42aa934e141da09f4b0ede9f8ba8455a64884f4" exitCode=0 Feb 28 09:21:08 crc kubenswrapper[4996]: I0228 09:21:08.840776 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-svn7g" event={"ID":"aec2d69b-af74-41fe-b5eb-2cd05e40ffde","Type":"ContainerDied","Data":"6d747f3601b10c7447e568d9f42aa934e141da09f4b0ede9f8ba8455a64884f4"} Feb 28 09:21:09 crc kubenswrapper[4996]: I0228 09:21:09.024324 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58bbf8b97d-2bk65"] Feb 28 09:21:11 crc kubenswrapper[4996]: W0228 09:21:11.976042 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b6ecc3c_f37f_4ae1_ad98_b71af059ca5e.slice/crio-801b702939741d7fcc43af4d633731fcfb36e18748b04db60b21fab2bb0e8e24 WatchSource:0}: Error finding container 
801b702939741d7fcc43af4d633731fcfb36e18748b04db60b21fab2bb0e8e24: Status 404 returned error can't find the container with id 801b702939741d7fcc43af4d633731fcfb36e18748b04db60b21fab2bb0e8e24 Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.304246 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-svn7g" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.485542 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5hnz\" (UniqueName: \"kubernetes.io/projected/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-kube-api-access-t5hnz\") pod \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.485605 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-scripts\") pod \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.485715 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-fernet-keys\") pod \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.485747 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-combined-ca-bundle\") pod \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.485790 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-config-data\") pod \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.485850 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-credential-keys\") pod \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\" (UID: \"aec2d69b-af74-41fe-b5eb-2cd05e40ffde\") " Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.488981 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-scripts" (OuterVolumeSpecName: "scripts") pod "aec2d69b-af74-41fe-b5eb-2cd05e40ffde" (UID: "aec2d69b-af74-41fe-b5eb-2cd05e40ffde"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.489571 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "aec2d69b-af74-41fe-b5eb-2cd05e40ffde" (UID: "aec2d69b-af74-41fe-b5eb-2cd05e40ffde"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.490289 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-kube-api-access-t5hnz" (OuterVolumeSpecName: "kube-api-access-t5hnz") pod "aec2d69b-af74-41fe-b5eb-2cd05e40ffde" (UID: "aec2d69b-af74-41fe-b5eb-2cd05e40ffde"). InnerVolumeSpecName "kube-api-access-t5hnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.490388 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "aec2d69b-af74-41fe-b5eb-2cd05e40ffde" (UID: "aec2d69b-af74-41fe-b5eb-2cd05e40ffde"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.511911 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-config-data" (OuterVolumeSpecName: "config-data") pod "aec2d69b-af74-41fe-b5eb-2cd05e40ffde" (UID: "aec2d69b-af74-41fe-b5eb-2cd05e40ffde"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.526470 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aec2d69b-af74-41fe-b5eb-2cd05e40ffde" (UID: "aec2d69b-af74-41fe-b5eb-2cd05e40ffde"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.587554 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.587827 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.587840 4996 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.587849 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5hnz\" (UniqueName: \"kubernetes.io/projected/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-kube-api-access-t5hnz\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.587862 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.587870 4996 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/aec2d69b-af74-41fe-b5eb-2cd05e40ffde-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:12 crc kubenswrapper[4996]: W0228 09:21:12.588403 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod115e68d8_a2a1_4c21_ae7f_2ec4e47855f9.slice/crio-b185a452bc5dac613118701892db6278ac29b6cb0c7b724ddabd0b72bbe465ba WatchSource:0}: Error finding container 
b185a452bc5dac613118701892db6278ac29b6cb0c7b724ddabd0b72bbe465ba: Status 404 returned error can't find the container with id b185a452bc5dac613118701892db6278ac29b6cb0c7b724ddabd0b72bbe465ba Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.598381 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c54f964b4-2pr6w"] Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.626644 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67b5c7c7f7-mzzc4"] Feb 28 09:21:12 crc kubenswrapper[4996]: W0228 09:21:12.626734 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49585240_7b27_458c_8d70_d23d8326bb94.slice/crio-3206d9dd40f49a9f7a6438140869c2a4eb24cea75548ba0290fbc9b29b31b88f WatchSource:0}: Error finding container 3206d9dd40f49a9f7a6438140869c2a4eb24cea75548ba0290fbc9b29b31b88f: Status 404 returned error can't find the container with id 3206d9dd40f49a9f7a6438140869c2a4eb24cea75548ba0290fbc9b29b31b88f Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.877522 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58bbf8b97d-2bk65" event={"ID":"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e","Type":"ContainerStarted","Data":"6dac997efbef4dfeb46bf2cc75a8674846f13afc16332a4e091183026d62ee72"} Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.877954 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58bbf8b97d-2bk65" event={"ID":"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e","Type":"ContainerStarted","Data":"4c1c68b34ee07c2f31465106cd4577c27aa1e33f204c0a55ffe4b78f3d9514c9"} Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.878031 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58bbf8b97d-2bk65" event={"ID":"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e","Type":"ContainerStarted","Data":"801b702939741d7fcc43af4d633731fcfb36e18748b04db60b21fab2bb0e8e24"} Feb 28 
09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.878249 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.881754 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerStarted","Data":"176e5ae954cc844c181150fbdb7c3fdfcd2881c2cc6c10023d8a60814fb6dcc7"} Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.883904 4996 generic.go:334] "Generic (PLEG): container finished" podID="1aa63814-587f-4839-bf7f-4bc1e02f8704" containerID="c17d816056f65a478e6a81374dbd561ff7bccc80162d2050e9f031a6b076126c" exitCode=0 Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.884037 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" event={"ID":"1aa63814-587f-4839-bf7f-4bc1e02f8704","Type":"ContainerDied","Data":"c17d816056f65a478e6a81374dbd561ff7bccc80162d2050e9f031a6b076126c"} Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.887390 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c54f964b4-2pr6w" event={"ID":"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9","Type":"ContainerStarted","Data":"51098c0364485dd40a4a50cfc6729823f2c3411e9f760bff229ecf1ce961a278"} Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.887449 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c54f964b4-2pr6w" event={"ID":"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9","Type":"ContainerStarted","Data":"b185a452bc5dac613118701892db6278ac29b6cb0c7b724ddabd0b72bbe465ba"} Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.890347 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67b5c7c7f7-mzzc4" event={"ID":"49585240-7b27-458c-8d70-d23d8326bb94","Type":"ContainerStarted","Data":"b7a95f3fc3578ddf33e205b07480ce2fe00ba8f0d6f4c709dce74c1bc923a0be"} Feb 28 09:21:12 crc 
kubenswrapper[4996]: I0228 09:21:12.890431 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67b5c7c7f7-mzzc4" event={"ID":"49585240-7b27-458c-8d70-d23d8326bb94","Type":"ContainerStarted","Data":"3206d9dd40f49a9f7a6438140869c2a4eb24cea75548ba0290fbc9b29b31b88f"}
Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.895449 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-svn7g" event={"ID":"aec2d69b-af74-41fe-b5eb-2cd05e40ffde","Type":"ContainerDied","Data":"30a0f09314215632a887d478fae702f2b6b4e5ba073f058bba8092e33c4a8114"}
Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.895661 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30a0f09314215632a887d478fae702f2b6b4e5ba073f058bba8092e33c4a8114"
Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.895797 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-svn7g"
Feb 28 09:21:12 crc kubenswrapper[4996]: I0228 09:21:12.900398 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58bbf8b97d-2bk65" podStartSLOduration=6.90037701 podStartE2EDuration="6.90037701s" podCreationTimestamp="2026-02-28 09:21:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:12.897689095 +0000 UTC m=+1236.588491906" watchObservedRunningTime="2026-02-28 09:21:12.90037701 +0000 UTC m=+1236.591179831"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.420792 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-644f7b559b-gngw5"]
Feb 28 09:21:13 crc kubenswrapper[4996]: E0228 09:21:13.421642 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec2d69b-af74-41fe-b5eb-2cd05e40ffde" containerName="keystone-bootstrap"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.421657 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec2d69b-af74-41fe-b5eb-2cd05e40ffde" containerName="keystone-bootstrap"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.421987 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec2d69b-af74-41fe-b5eb-2cd05e40ffde" containerName="keystone-bootstrap"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.432219 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.438886 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.439043 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.439206 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.439336 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.439651 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.450478 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mxpf8"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.478153 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-644f7b559b-gngw5"]
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.603067 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-internal-tls-certs\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.603205 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-fernet-keys\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.603244 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-config-data\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.603271 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-public-tls-certs\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.603305 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-credential-keys\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.603422 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-scripts\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.603459 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9dq8\" (UniqueName: \"kubernetes.io/projected/736c34b0-e2b3-4d08-be5b-53491a475d18-kube-api-access-z9dq8\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.603513 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-combined-ca-bundle\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.705482 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-scripts\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.705546 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9dq8\" (UniqueName: \"kubernetes.io/projected/736c34b0-e2b3-4d08-be5b-53491a475d18-kube-api-access-z9dq8\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.705601 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-combined-ca-bundle\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.705629 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-internal-tls-certs\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.705696 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-fernet-keys\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.705720 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-config-data\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.705743 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-public-tls-certs\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.705772 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-credential-keys\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.711146 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-credential-keys\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.711554 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-combined-ca-bundle\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.712252 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-config-data\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.712499 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-internal-tls-certs\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.713352 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-public-tls-certs\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.713531 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-scripts\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.719388 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/736c34b0-e2b3-4d08-be5b-53491a475d18-fernet-keys\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.723537 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9dq8\" (UniqueName: \"kubernetes.io/projected/736c34b0-e2b3-4d08-be5b-53491a475d18-kube-api-access-z9dq8\") pod \"keystone-644f7b559b-gngw5\" (UID: \"736c34b0-e2b3-4d08-be5b-53491a475d18\") " pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.768692 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.781023 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55cc9dfcd4-m6gv8"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.781060 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-55cc9dfcd4-m6gv8"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.905757 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6ccc6bcbc4-2fmz9"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.908074 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6ccc6bcbc4-2fmz9"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.911485 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" event={"ID":"1aa63814-587f-4839-bf7f-4bc1e02f8704","Type":"ContainerStarted","Data":"ce842fd81361d42e7c8b9c73cfb896c359f0504e5bfbe215601faf167ab93b2d"}
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.912517 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.968478 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" podStartSLOduration=8.968458583 podStartE2EDuration="8.968458583s" podCreationTimestamp="2026-02-28 09:21:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:13.95175882 +0000 UTC m=+1237.642561641" watchObservedRunningTime="2026-02-28 09:21:13.968458583 +0000 UTC m=+1237.659261394"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.994826 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c54f964b4-2pr6w" event={"ID":"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9","Type":"ContainerStarted","Data":"847717fb96d7153b1ea89ea7c4cb33a2cdc4173d05d5e4bb193f470609c8d7af"}
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.996093 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c54f964b4-2pr6w"
Feb 28 09:21:13 crc kubenswrapper[4996]: I0228 09:21:13.996125 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c54f964b4-2pr6w"
Feb 28 09:21:14 crc kubenswrapper[4996]: I0228 09:21:14.035880 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67b5c7c7f7-mzzc4" event={"ID":"49585240-7b27-458c-8d70-d23d8326bb94","Type":"ContainerStarted","Data":"541161af47e15e689a7869b89d735ee7e1ec2e404f5bdd2a6c84cafb7ae9f3bf"}
Feb 28 09:21:14 crc kubenswrapper[4996]: I0228 09:21:14.036139 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67b5c7c7f7-mzzc4"
Feb 28 09:21:14 crc kubenswrapper[4996]: I0228 09:21:14.072156 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c54f964b4-2pr6w" podStartSLOduration=7.072131064 podStartE2EDuration="7.072131064s" podCreationTimestamp="2026-02-28 09:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:14.04926809 +0000 UTC m=+1237.740070911" watchObservedRunningTime="2026-02-28 09:21:14.072131064 +0000 UTC m=+1237.762933895"
Feb 28 09:21:14 crc kubenswrapper[4996]: I0228 09:21:14.121346 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67b5c7c7f7-mzzc4" podStartSLOduration=6.121324536 podStartE2EDuration="6.121324536s" podCreationTimestamp="2026-02-28 09:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:14.099283992 +0000 UTC m=+1237.790086823" watchObservedRunningTime="2026-02-28 09:21:14.121324536 +0000 UTC m=+1237.812127337"
Feb 28 09:21:14 crc kubenswrapper[4996]: I0228 09:21:14.402453 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-644f7b559b-gngw5"]
Feb 28 09:21:15 crc kubenswrapper[4996]: I0228 09:21:15.045353 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-644f7b559b-gngw5" event={"ID":"736c34b0-e2b3-4d08-be5b-53491a475d18","Type":"ContainerStarted","Data":"86245d6eb7f81691ee338f4e9d313b96b5cdc24e4afdd5dffe8db818486f8c50"}
Feb 28 09:21:15 crc kubenswrapper[4996]: I0228 09:21:15.045634 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-644f7b559b-gngw5" event={"ID":"736c34b0-e2b3-4d08-be5b-53491a475d18","Type":"ContainerStarted","Data":"4e05e898a214e8c6c939e26055ae084f99a81b1f7995885e933cc02ced6c16e4"}
Feb 28 09:21:15 crc kubenswrapper[4996]: I0228 09:21:15.064744 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-644f7b559b-gngw5" podStartSLOduration=2.064726879 podStartE2EDuration="2.064726879s" podCreationTimestamp="2026-02-28 09:21:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:15.06187266 +0000 UTC m=+1238.752675481" watchObservedRunningTime="2026-02-28 09:21:15.064726879 +0000 UTC m=+1238.755529690"
Feb 28 09:21:16 crc kubenswrapper[4996]: I0228 09:21:16.053535 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-644f7b559b-gngw5"
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.121942 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerStarted","Data":"e22715be76497be0a48f352176fb6318b60e2a22c7297c6b086aff63727d492a"}
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.122347 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="sg-core" containerID="cri-o://176e5ae954cc844c181150fbdb7c3fdfcd2881c2cc6c10023d8a60814fb6dcc7" gracePeriod=30
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.122364 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.122327 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="ceilometer-notification-agent" containerID="cri-o://aa346bb74d259903bd3ce9eb8ca54e81f88e7a9d9a3838c86d0809224d6f31f4" gracePeriod=30
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.122070 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="ceilometer-central-agent" containerID="cri-o://d9e88988b09b9e0d0341a1da6515d87ea59a6c985165eee72521aff574981628" gracePeriod=30
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.122175 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="proxy-httpd" containerID="cri-o://e22715be76497be0a48f352176fb6318b60e2a22c7297c6b086aff63727d492a" gracePeriod=30
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.128116 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2fdzl" event={"ID":"531cd3d1-8618-42d1-88a1-b23b8ca9be62","Type":"ContainerStarted","Data":"14c78f8e5f74c5db001d96c96cd93a4d11bacab40797dd985e28ad02ff86d07c"}
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.152813 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.084059994 podStartE2EDuration="46.152768262s" podCreationTimestamp="2026-02-28 09:20:35 +0000 UTC" firstStartedPulling="2026-02-28 09:20:36.523804546 +0000 UTC m=+1200.214607357" lastFinishedPulling="2026-02-28 09:21:20.592512774 +0000 UTC m=+1244.283315625" observedRunningTime="2026-02-28 09:21:21.146422978 +0000 UTC m=+1244.837225789" watchObservedRunningTime="2026-02-28 09:21:21.152768262 +0000 UTC m=+1244.843571073"
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.164481 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2fdzl" podStartSLOduration=2.217291367 podStartE2EDuration="46.164460534s" podCreationTimestamp="2026-02-28 09:20:35 +0000 UTC" firstStartedPulling="2026-02-28 09:20:36.646495635 +0000 UTC m=+1200.337298446" lastFinishedPulling="2026-02-28 09:21:20.593664802 +0000 UTC m=+1244.284467613" observedRunningTime="2026-02-28 09:21:21.162649351 +0000 UTC m=+1244.853452172" watchObservedRunningTime="2026-02-28 09:21:21.164460534 +0000 UTC m=+1244.855263345"
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.334793 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm"
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.401479 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vtjbq"]
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.401813 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" podUID="64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" containerName="dnsmasq-dns" containerID="cri-o://c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c" gracePeriod=10
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.908563 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq"
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.984929 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-nb\") pod \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") "
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.985018 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-sb\") pod \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") "
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.985072 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-dns-svc\") pod \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") "
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.985148 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs4sr\" (UniqueName: \"kubernetes.io/projected/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-kube-api-access-zs4sr\") pod \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") "
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.985223 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-config\") pod \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\" (UID: \"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2\") "
Feb 28 09:21:21 crc kubenswrapper[4996]: I0228 09:21:21.992231 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-kube-api-access-zs4sr" (OuterVolumeSpecName: "kube-api-access-zs4sr") pod "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" (UID: "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2"). InnerVolumeSpecName "kube-api-access-zs4sr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.021872 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" (UID: "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.023602 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-config" (OuterVolumeSpecName: "config") pod "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" (UID: "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.037538 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" (UID: "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.057606 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" (UID: "64ea5c66-9d4b-4bdd-b71b-b769e0273ae2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.087025 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-config\") on node \"crc\" DevicePath \"\""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.087055 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.087065 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.087074 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.087082 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs4sr\" (UniqueName: \"kubernetes.io/projected/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2-kube-api-access-zs4sr\") on node \"crc\" DevicePath \"\""
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.136817 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kplwp" event={"ID":"db0401da-7bc1-4203-bdfb-2a06deade35b","Type":"ContainerStarted","Data":"a2f4f879a94ae89682e11bd2bcdf5ab11880c35ef1c3ac08f474201f62261270"}
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.139577 4996 generic.go:334] "Generic (PLEG): container finished" podID="64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" containerID="c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c" exitCode=0
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.139627 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq"
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.139638 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" event={"ID":"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2","Type":"ContainerDied","Data":"c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c"}
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.139756 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-vtjbq" event={"ID":"64ea5c66-9d4b-4bdd-b71b-b769e0273ae2","Type":"ContainerDied","Data":"07c87d2d327c5f118492178d2db11586d56b896ee7a4b414d5009823765936b3"}
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.139792 4996 scope.go:117] "RemoveContainer" containerID="c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c"
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.161968 4996 generic.go:334] "Generic (PLEG): container finished" podID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerID="e22715be76497be0a48f352176fb6318b60e2a22c7297c6b086aff63727d492a" exitCode=0
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.161999 4996 generic.go:334] "Generic (PLEG): container finished" podID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerID="176e5ae954cc844c181150fbdb7c3fdfcd2881c2cc6c10023d8a60814fb6dcc7" exitCode=2
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.162027 4996 generic.go:334] "Generic (PLEG): container finished" podID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerID="aa346bb74d259903bd3ce9eb8ca54e81f88e7a9d9a3838c86d0809224d6f31f4" exitCode=0
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.162034 4996 generic.go:334] "Generic (PLEG): container finished" podID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerID="d9e88988b09b9e0d0341a1da6515d87ea59a6c985165eee72521aff574981628" exitCode=0
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.162052 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerDied","Data":"e22715be76497be0a48f352176fb6318b60e2a22c7297c6b086aff63727d492a"}
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.162075 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerDied","Data":"176e5ae954cc844c181150fbdb7c3fdfcd2881c2cc6c10023d8a60814fb6dcc7"}
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.162084 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerDied","Data":"aa346bb74d259903bd3ce9eb8ca54e81f88e7a9d9a3838c86d0809224d6f31f4"}
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.162095 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerDied","Data":"d9e88988b09b9e0d0341a1da6515d87ea59a6c985165eee72521aff574981628"}
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.187472 4996 scope.go:117] "RemoveContainer" containerID="8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda"
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.195754 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-kplwp" podStartSLOduration=3.5405465449999998 podStartE2EDuration="48.195725096s" podCreationTimestamp="2026-02-28 09:20:34 +0000 UTC" firstStartedPulling="2026-02-28 09:20:35.946126077 +0000 UTC m=+1199.636928888" lastFinishedPulling="2026-02-28 09:21:20.601304618 +0000 UTC m=+1244.292107439" observedRunningTime="2026-02-28 09:21:22.167244557 +0000 UTC m=+1245.858047368" watchObservedRunningTime="2026-02-28 09:21:22.195725096 +0000 UTC m=+1245.886527907"
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.213511 4996 scope.go:117] "RemoveContainer" containerID="c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c"
Feb 28 09:21:22 crc kubenswrapper[4996]: E0228 09:21:22.216177 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c\": container with ID starting with c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c not found: ID does not exist" containerID="c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c"
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.216217 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c"} err="failed to get container status \"c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c\": rpc error: code = NotFound desc = could not find container \"c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c\": container with ID starting with c69beff7dcbe8db4841facec1e57b359f4b0e222d4a6b594c007523814b94f7c not found: ID does not exist"
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.216243 4996 scope.go:117] "RemoveContainer" containerID="8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda"
Feb 28 09:21:22 crc kubenswrapper[4996]: E0228 09:21:22.216544 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda\": container with ID starting with 8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda not found: ID does not exist" containerID="8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda"
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.216574 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda"} err="failed to get container status \"8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda\": rpc error: code = NotFound desc = could not find container \"8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda\": container with ID starting with 8004f56a9275c450a090b2c9a4224cf699af5d9e5b5584fa9321d5e62aeb9eda not found: ID does not exist"
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.227091 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vtjbq"]
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.247140 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-vtjbq"]
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.287969 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.395768 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq84b\" (UniqueName: \"kubernetes.io/projected/1f109701-52a5-4a26-ae21-415ebc0d21ff-kube-api-access-xq84b\") pod \"1f109701-52a5-4a26-ae21-415ebc0d21ff\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") "
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.395873 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-log-httpd\") pod \"1f109701-52a5-4a26-ae21-415ebc0d21ff\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") "
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.395897 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-run-httpd\") pod \"1f109701-52a5-4a26-ae21-415ebc0d21ff\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") "
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.395954 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-sg-core-conf-yaml\") pod \"1f109701-52a5-4a26-ae21-415ebc0d21ff\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") "
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.396025 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-scripts\") pod \"1f109701-52a5-4a26-ae21-415ebc0d21ff\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") "
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.396105 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-combined-ca-bundle\") pod \"1f109701-52a5-4a26-ae21-415ebc0d21ff\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") "
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.396144 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-config-data\") pod \"1f109701-52a5-4a26-ae21-415ebc0d21ff\" (UID: \"1f109701-52a5-4a26-ae21-415ebc0d21ff\") "
Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.397460 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1f109701-52a5-4a26-ae21-415ebc0d21ff" (UID: "1f109701-52a5-4a26-ae21-415ebc0d21ff"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.397683 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1f109701-52a5-4a26-ae21-415ebc0d21ff" (UID: "1f109701-52a5-4a26-ae21-415ebc0d21ff"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.400592 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-scripts" (OuterVolumeSpecName: "scripts") pod "1f109701-52a5-4a26-ae21-415ebc0d21ff" (UID: "1f109701-52a5-4a26-ae21-415ebc0d21ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.401079 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f109701-52a5-4a26-ae21-415ebc0d21ff-kube-api-access-xq84b" (OuterVolumeSpecName: "kube-api-access-xq84b") pod "1f109701-52a5-4a26-ae21-415ebc0d21ff" (UID: "1f109701-52a5-4a26-ae21-415ebc0d21ff"). InnerVolumeSpecName "kube-api-access-xq84b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.423735 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1f109701-52a5-4a26-ae21-415ebc0d21ff" (UID: "1f109701-52a5-4a26-ae21-415ebc0d21ff"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.473574 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f109701-52a5-4a26-ae21-415ebc0d21ff" (UID: "1f109701-52a5-4a26-ae21-415ebc0d21ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.497540 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.497571 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq84b\" (UniqueName: \"kubernetes.io/projected/1f109701-52a5-4a26-ae21-415ebc0d21ff-kube-api-access-xq84b\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.497582 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.497590 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f109701-52a5-4a26-ae21-415ebc0d21ff-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.497601 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.497609 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.514742 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-config-data" (OuterVolumeSpecName: "config-data") pod "1f109701-52a5-4a26-ae21-415ebc0d21ff" (UID: "1f109701-52a5-4a26-ae21-415ebc0d21ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:22 crc kubenswrapper[4996]: I0228 09:21:22.599354 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f109701-52a5-4a26-ae21-415ebc0d21ff-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.046277 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" path="/var/lib/kubelet/pods/64ea5c66-9d4b-4bdd-b71b-b769e0273ae2/volumes" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.193703 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f109701-52a5-4a26-ae21-415ebc0d21ff","Type":"ContainerDied","Data":"9f32cb5e690b997c7348ce1d5ed24cdb4773ddf1478c41290e963673eff60b13"} Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.193758 4996 scope.go:117] "RemoveContainer" containerID="e22715be76497be0a48f352176fb6318b60e2a22c7297c6b086aff63727d492a" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.193784 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.220267 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.234040 4996 scope.go:117] "RemoveContainer" containerID="176e5ae954cc844c181150fbdb7c3fdfcd2881c2cc6c10023d8a60814fb6dcc7" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.236106 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.308859 4996 scope.go:117] "RemoveContainer" containerID="aa346bb74d259903bd3ce9eb8ca54e81f88e7a9d9a3838c86d0809224d6f31f4" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.311963 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:23 crc kubenswrapper[4996]: E0228 09:21:23.312428 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="ceilometer-notification-agent" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.312520 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="ceilometer-notification-agent" Feb 28 09:21:23 crc kubenswrapper[4996]: E0228 09:21:23.312617 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="ceilometer-central-agent" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.312686 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="ceilometer-central-agent" Feb 28 09:21:23 crc kubenswrapper[4996]: E0228 09:21:23.312765 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="proxy-httpd" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.312821 4996 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="proxy-httpd" Feb 28 09:21:23 crc kubenswrapper[4996]: E0228 09:21:23.312873 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="sg-core" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.312927 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="sg-core" Feb 28 09:21:23 crc kubenswrapper[4996]: E0228 09:21:23.312985 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" containerName="init" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.313078 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" containerName="init" Feb 28 09:21:23 crc kubenswrapper[4996]: E0228 09:21:23.313181 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" containerName="dnsmasq-dns" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.313256 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" containerName="dnsmasq-dns" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.313505 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="sg-core" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.313601 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="proxy-httpd" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.313680 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="ceilometer-notification-agent" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.313771 4996 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="64ea5c66-9d4b-4bdd-b71b-b769e0273ae2" containerName="dnsmasq-dns" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.313932 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" containerName="ceilometer-central-agent" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.316139 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.325208 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.328752 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.329114 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.335810 4996 scope.go:117] "RemoveContainer" containerID="d9e88988b09b9e0d0341a1da6515d87ea59a6c985165eee72521aff574981628" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.358658 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:23 crc kubenswrapper[4996]: E0228 09:21:23.359413 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-m5r9r log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[combined-ca-bundle config-data kube-api-access-m5r9r log-httpd run-httpd scripts sg-core-conf-yaml]: context canceled" pod="openstack/ceilometer-0" podUID="63ba8dd6-8f1c-4099-8043-8ae860fe24fe" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.419814 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-run-httpd\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.419884 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.419947 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-scripts\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.419983 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-log-httpd\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.420063 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5r9r\" (UniqueName: \"kubernetes.io/projected/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-kube-api-access-m5r9r\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.420121 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-config-data\") pod \"ceilometer-0\" (UID: 
\"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.420176 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.522192 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5r9r\" (UniqueName: \"kubernetes.io/projected/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-kube-api-access-m5r9r\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.522291 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-config-data\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.522377 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.522408 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-run-httpd\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.522444 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.522479 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-scripts\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.522515 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-log-httpd\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.523438 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-run-httpd\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.523737 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-log-httpd\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.528246 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 
crc kubenswrapper[4996]: I0228 09:21:23.533825 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.536746 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-scripts\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.537486 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-config-data\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.547661 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5r9r\" (UniqueName: \"kubernetes.io/projected/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-kube-api-access-m5r9r\") pod \"ceilometer-0\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " pod="openstack/ceilometer-0" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.782816 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-55cc9dfcd4-m6gv8" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 28 09:21:23 crc kubenswrapper[4996]: I0228 09:21:23.907949 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6ccc6bcbc4-2fmz9" podUID="b605afa6-a344-45f0-b62a-56f46b346c52" containerName="horizon" 
probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.207150 4996 generic.go:334] "Generic (PLEG): container finished" podID="531cd3d1-8618-42d1-88a1-b23b8ca9be62" containerID="14c78f8e5f74c5db001d96c96cd93a4d11bacab40797dd985e28ad02ff86d07c" exitCode=0 Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.207242 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2fdzl" event={"ID":"531cd3d1-8618-42d1-88a1-b23b8ca9be62","Type":"ContainerDied","Data":"14c78f8e5f74c5db001d96c96cd93a4d11bacab40797dd985e28ad02ff86d07c"} Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.210683 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.222157 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.342355 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-combined-ca-bundle\") pod \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.342745 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-log-httpd\") pod \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.343114 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63ba8dd6-8f1c-4099-8043-8ae860fe24fe" (UID: "63ba8dd6-8f1c-4099-8043-8ae860fe24fe"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.343340 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-sg-core-conf-yaml\") pod \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.343771 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-scripts\") pod \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.343973 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-config-data\") pod \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.344165 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-run-httpd\") pod \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.344470 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63ba8dd6-8f1c-4099-8043-8ae860fe24fe" (UID: "63ba8dd6-8f1c-4099-8043-8ae860fe24fe"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.344743 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5r9r\" (UniqueName: \"kubernetes.io/projected/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-kube-api-access-m5r9r\") pod \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\" (UID: \"63ba8dd6-8f1c-4099-8043-8ae860fe24fe\") " Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.345694 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.345803 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.346667 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63ba8dd6-8f1c-4099-8043-8ae860fe24fe" (UID: "63ba8dd6-8f1c-4099-8043-8ae860fe24fe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.351177 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-config-data" (OuterVolumeSpecName: "config-data") pod "63ba8dd6-8f1c-4099-8043-8ae860fe24fe" (UID: "63ba8dd6-8f1c-4099-8043-8ae860fe24fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.351313 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-scripts" (OuterVolumeSpecName: "scripts") pod "63ba8dd6-8f1c-4099-8043-8ae860fe24fe" (UID: "63ba8dd6-8f1c-4099-8043-8ae860fe24fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.366163 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-kube-api-access-m5r9r" (OuterVolumeSpecName: "kube-api-access-m5r9r") pod "63ba8dd6-8f1c-4099-8043-8ae860fe24fe" (UID: "63ba8dd6-8f1c-4099-8043-8ae860fe24fe"). InnerVolumeSpecName "kube-api-access-m5r9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.371100 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63ba8dd6-8f1c-4099-8043-8ae860fe24fe" (UID: "63ba8dd6-8f1c-4099-8043-8ae860fe24fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.447221 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5r9r\" (UniqueName: \"kubernetes.io/projected/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-kube-api-access-m5r9r\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.447248 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.447257 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.447266 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:24 crc kubenswrapper[4996]: I0228 09:21:24.447274 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ba8dd6-8f1c-4099-8043-8ae860fe24fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.045069 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f109701-52a5-4a26-ae21-415ebc0d21ff" path="/var/lib/kubelet/pods/1f109701-52a5-4a26-ae21-415ebc0d21ff/volumes" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.219365 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.268557 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.276794 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.300226 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.302124 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.304926 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.305116 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.312976 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.366064 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-log-httpd\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.366096 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-run-httpd\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.366130 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.366172 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn7ht\" (UniqueName: \"kubernetes.io/projected/add6b806-1241-4e02-9959-0dd147b74abe-kube-api-access-hn7ht\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.366199 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-config-data\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.366307 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.366338 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-scripts\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.467391 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-scripts\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.467772 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-run-httpd\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.467802 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-log-httpd\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.468046 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.468208 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn7ht\" (UniqueName: \"kubernetes.io/projected/add6b806-1241-4e02-9959-0dd147b74abe-kube-api-access-hn7ht\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.468297 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-config-data\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 
09:21:25.468487 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.474640 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-log-httpd\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.475456 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-run-httpd\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.478469 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.480192 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-config-data\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.480276 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.486354 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-scripts\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.488423 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn7ht\" (UniqueName: \"kubernetes.io/projected/add6b806-1241-4e02-9959-0dd147b74abe-kube-api-access-hn7ht\") pod \"ceilometer-0\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: E0228 09:21:25.491663 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb0401da_7bc1_4203_bdfb_2a06deade35b.slice/crio-conmon-a2f4f879a94ae89682e11bd2bcdf5ab11880c35ef1c3ac08f474201f62261270.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb0401da_7bc1_4203_bdfb_2a06deade35b.slice/crio-a2f4f879a94ae89682e11bd2bcdf5ab11880c35ef1c3ac08f474201f62261270.scope\": RecentStats: unable to find data in memory cache]" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.604934 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.620404 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.671696 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-db-sync-config-data\") pod \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.671788 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh5n8\" (UniqueName: \"kubernetes.io/projected/531cd3d1-8618-42d1-88a1-b23b8ca9be62-kube-api-access-wh5n8\") pod \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.672043 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-combined-ca-bundle\") pod \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\" (UID: \"531cd3d1-8618-42d1-88a1-b23b8ca9be62\") " Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.678060 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/531cd3d1-8618-42d1-88a1-b23b8ca9be62-kube-api-access-wh5n8" (OuterVolumeSpecName: "kube-api-access-wh5n8") pod "531cd3d1-8618-42d1-88a1-b23b8ca9be62" (UID: "531cd3d1-8618-42d1-88a1-b23b8ca9be62"). InnerVolumeSpecName "kube-api-access-wh5n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.679399 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "531cd3d1-8618-42d1-88a1-b23b8ca9be62" (UID: "531cd3d1-8618-42d1-88a1-b23b8ca9be62"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.696884 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "531cd3d1-8618-42d1-88a1-b23b8ca9be62" (UID: "531cd3d1-8618-42d1-88a1-b23b8ca9be62"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.776518 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh5n8\" (UniqueName: \"kubernetes.io/projected/531cd3d1-8618-42d1-88a1-b23b8ca9be62-kube-api-access-wh5n8\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.776768 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:25 crc kubenswrapper[4996]: I0228 09:21:25.776780 4996 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/531cd3d1-8618-42d1-88a1-b23b8ca9be62-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.051276 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:26 crc kubenswrapper[4996]: W0228 09:21:26.052561 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadd6b806_1241_4e02_9959_0dd147b74abe.slice/crio-e48c4c1073e9c15213ec9134be1c22ea331918604c82ed247a83877e4f13c0dc WatchSource:0}: Error finding container e48c4c1073e9c15213ec9134be1c22ea331918604c82ed247a83877e4f13c0dc: Status 404 returned error can't find the container with id 
e48c4c1073e9c15213ec9134be1c22ea331918604c82ed247a83877e4f13c0dc Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.232379 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2fdzl" event={"ID":"531cd3d1-8618-42d1-88a1-b23b8ca9be62","Type":"ContainerDied","Data":"0d967454fa20139efabfd717f20901816644c923bc56397507673464440d12b9"} Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.232689 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d967454fa20139efabfd717f20901816644c923bc56397507673464440d12b9" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.232401 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2fdzl" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.234867 4996 generic.go:334] "Generic (PLEG): container finished" podID="db0401da-7bc1-4203-bdfb-2a06deade35b" containerID="a2f4f879a94ae89682e11bd2bcdf5ab11880c35ef1c3ac08f474201f62261270" exitCode=0 Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.234935 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kplwp" event={"ID":"db0401da-7bc1-4203-bdfb-2a06deade35b","Type":"ContainerDied","Data":"a2f4f879a94ae89682e11bd2bcdf5ab11880c35ef1c3ac08f474201f62261270"} Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.237101 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerStarted","Data":"e48c4c1073e9c15213ec9134be1c22ea331918604c82ed247a83877e4f13c0dc"} Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.472605 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7457c94496-jvp8b"] Feb 28 09:21:26 crc kubenswrapper[4996]: E0228 09:21:26.472946 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="531cd3d1-8618-42d1-88a1-b23b8ca9be62" 
containerName="barbican-db-sync" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.472964 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="531cd3d1-8618-42d1-88a1-b23b8ca9be62" containerName="barbican-db-sync" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.473140 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="531cd3d1-8618-42d1-88a1-b23b8ca9be62" containerName="barbican-db-sync" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.473990 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.481781 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.481950 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qhlll" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.485269 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.505725 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-665ffb6dc-7h4gw"] Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.507132 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.517044 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.531209 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7457c94496-jvp8b"] Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.539058 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-665ffb6dc-7h4gw"] Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.591851 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-combined-ca-bundle\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.591964 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-config-data\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.591984 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d19959-1945-43a8-b005-f4f136fcdf10-logs\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.592051 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-config-data-custom\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.592088 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8wxt\" (UniqueName: \"kubernetes.io/projected/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-kube-api-access-w8wxt\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.592105 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d19959-1945-43a8-b005-f4f136fcdf10-config-data\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.592129 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d19959-1945-43a8-b005-f4f136fcdf10-combined-ca-bundle\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.592157 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4d19959-1945-43a8-b005-f4f136fcdf10-config-data-custom\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" 
Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.592174 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-logs\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.592191 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kktn\" (UniqueName: \"kubernetes.io/projected/d4d19959-1945-43a8-b005-f4f136fcdf10-kube-api-access-6kktn\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.631367 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-hff69"] Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.632775 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.643456 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-hff69"] Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.672228 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5d4548f44b-9pcrx"] Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.673503 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.676094 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693554 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693620 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8wxt\" (UniqueName: \"kubernetes.io/projected/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-kube-api-access-w8wxt\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693650 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d19959-1945-43a8-b005-f4f136fcdf10-config-data\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693674 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zft65\" (UniqueName: \"kubernetes.io/projected/5466330c-a12f-4f77-a90e-d731ad3282eb-kube-api-access-zft65\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693713 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d19959-1945-43a8-b005-f4f136fcdf10-combined-ca-bundle\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693750 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4d19959-1945-43a8-b005-f4f136fcdf10-config-data-custom\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693775 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-logs\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693797 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kktn\" (UniqueName: \"kubernetes.io/projected/d4d19959-1945-43a8-b005-f4f136fcdf10-kube-api-access-6kktn\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693815 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-combined-ca-bundle\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc 
kubenswrapper[4996]: I0228 09:21:26.693841 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693892 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-config-data\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.693910 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d19959-1945-43a8-b005-f4f136fcdf10-logs\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.694015 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-config\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.694043 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-config-data-custom\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc 
kubenswrapper[4996]: I0228 09:21:26.694072 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-dns-svc\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.695123 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d4548f44b-9pcrx"] Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.702076 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4d19959-1945-43a8-b005-f4f136fcdf10-logs\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.703614 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-logs\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.705142 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-combined-ca-bundle\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.706960 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-config-data-custom\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: 
\"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.707717 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4d19959-1945-43a8-b005-f4f136fcdf10-config-data-custom\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.708821 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-config-data\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.709310 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4d19959-1945-43a8-b005-f4f136fcdf10-combined-ca-bundle\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.712411 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4d19959-1945-43a8-b005-f4f136fcdf10-config-data\") pod \"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.721624 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kktn\" (UniqueName: \"kubernetes.io/projected/d4d19959-1945-43a8-b005-f4f136fcdf10-kube-api-access-6kktn\") pod 
\"barbican-keystone-listener-7457c94496-jvp8b\" (UID: \"d4d19959-1945-43a8-b005-f4f136fcdf10\") " pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.738456 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8wxt\" (UniqueName: \"kubernetes.io/projected/85fcdb36-feb8-4f2f-a91e-ffbce6e91d04-kube-api-access-w8wxt\") pod \"barbican-worker-665ffb6dc-7h4gw\" (UID: \"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04\") " pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.795856 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbrnr\" (UniqueName: \"kubernetes.io/projected/47bd376f-eac1-4b4d-b765-24ea791625ae-kube-api-access-dbrnr\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.795917 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-config\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.795961 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-dns-svc\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.795992 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-sb\") pod 
\"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.796044 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zft65\" (UniqueName: \"kubernetes.io/projected/5466330c-a12f-4f77-a90e-d731ad3282eb-kube-api-access-zft65\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.796065 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data-custom\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.796086 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-combined-ca-bundle\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.796121 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47bd376f-eac1-4b4d-b765-24ea791625ae-logs\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.796161 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.796187 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.796714 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-config\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.797524 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.797910 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.798269 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-dns-svc\") pod 
\"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.817636 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zft65\" (UniqueName: \"kubernetes.io/projected/5466330c-a12f-4f77-a90e-d731ad3282eb-kube-api-access-zft65\") pod \"dnsmasq-dns-6bb684768f-hff69\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.898210 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47bd376f-eac1-4b4d-b765-24ea791625ae-logs\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.898363 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.898469 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbrnr\" (UniqueName: \"kubernetes.io/projected/47bd376f-eac1-4b4d-b765-24ea791625ae-kube-api-access-dbrnr\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.898581 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data-custom\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: 
\"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.898616 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-combined-ca-bundle\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.899936 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47bd376f-eac1-4b4d-b765-24ea791625ae-logs\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.904658 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-combined-ca-bundle\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.905209 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data-custom\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.906395 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" 
Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.916655 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbrnr\" (UniqueName: \"kubernetes.io/projected/47bd376f-eac1-4b4d-b765-24ea791625ae-kube-api-access-dbrnr\") pod \"barbican-api-5d4548f44b-9pcrx\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.926831 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.954543 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-665ffb6dc-7h4gw" Feb 28 09:21:26 crc kubenswrapper[4996]: I0228 09:21:26.967141 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.007717 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.079922 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ba8dd6-8f1c-4099-8043-8ae860fe24fe" path="/var/lib/kubelet/pods/63ba8dd6-8f1c-4099-8043-8ae860fe24fe/volumes" Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.254900 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerStarted","Data":"73c8e71acfa6f183593e97ea7caafc7f6a9df02c024990653e208621b241da19"} Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.449041 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7457c94496-jvp8b"] Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.554526 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-665ffb6dc-7h4gw"] Feb 28 09:21:27 crc kubenswrapper[4996]: W0228 09:21:27.569614 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85fcdb36_feb8_4f2f_a91e_ffbce6e91d04.slice/crio-9ecc6821c1389733f0d0e27bea8c735faad7c8f188a4191e47bca794b3df08d0 WatchSource:0}: Error finding container 9ecc6821c1389733f0d0e27bea8c735faad7c8f188a4191e47bca794b3df08d0: Status 404 returned error can't find the container with id 9ecc6821c1389733f0d0e27bea8c735faad7c8f188a4191e47bca794b3df08d0 Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.648551 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5d4548f44b-9pcrx"] Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.788990 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-hff69"] Feb 28 09:21:27 crc kubenswrapper[4996]: W0228 09:21:27.820025 4996 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5466330c_a12f_4f77_a90e_d731ad3282eb.slice/crio-12d4765513d7fa934797b7bd0f21dbcf4801653851020a8eb472ea8709785df2 WatchSource:0}: Error finding container 12d4765513d7fa934797b7bd0f21dbcf4801653851020a8eb472ea8709785df2: Status 404 returned error can't find the container with id 12d4765513d7fa934797b7bd0f21dbcf4801653851020a8eb472ea8709785df2 Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.842129 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kplwp" Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.936224 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tgfv\" (UniqueName: \"kubernetes.io/projected/db0401da-7bc1-4203-bdfb-2a06deade35b-kube-api-access-7tgfv\") pod \"db0401da-7bc1-4203-bdfb-2a06deade35b\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.936271 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-config-data\") pod \"db0401da-7bc1-4203-bdfb-2a06deade35b\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.936335 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-combined-ca-bundle\") pod \"db0401da-7bc1-4203-bdfb-2a06deade35b\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.936634 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-scripts\") pod \"db0401da-7bc1-4203-bdfb-2a06deade35b\" (UID: 
\"db0401da-7bc1-4203-bdfb-2a06deade35b\") " Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.936702 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-db-sync-config-data\") pod \"db0401da-7bc1-4203-bdfb-2a06deade35b\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.936785 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db0401da-7bc1-4203-bdfb-2a06deade35b-etc-machine-id\") pod \"db0401da-7bc1-4203-bdfb-2a06deade35b\" (UID: \"db0401da-7bc1-4203-bdfb-2a06deade35b\") " Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.937435 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db0401da-7bc1-4203-bdfb-2a06deade35b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "db0401da-7bc1-4203-bdfb-2a06deade35b" (UID: "db0401da-7bc1-4203-bdfb-2a06deade35b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.940333 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-scripts" (OuterVolumeSpecName: "scripts") pod "db0401da-7bc1-4203-bdfb-2a06deade35b" (UID: "db0401da-7bc1-4203-bdfb-2a06deade35b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.940342 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db0401da-7bc1-4203-bdfb-2a06deade35b-kube-api-access-7tgfv" (OuterVolumeSpecName: "kube-api-access-7tgfv") pod "db0401da-7bc1-4203-bdfb-2a06deade35b" (UID: "db0401da-7bc1-4203-bdfb-2a06deade35b"). 
InnerVolumeSpecName "kube-api-access-7tgfv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.941307 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "db0401da-7bc1-4203-bdfb-2a06deade35b" (UID: "db0401da-7bc1-4203-bdfb-2a06deade35b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:27 crc kubenswrapper[4996]: I0228 09:21:27.981094 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db0401da-7bc1-4203-bdfb-2a06deade35b" (UID: "db0401da-7bc1-4203-bdfb-2a06deade35b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.007918 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-config-data" (OuterVolumeSpecName: "config-data") pod "db0401da-7bc1-4203-bdfb-2a06deade35b" (UID: "db0401da-7bc1-4203-bdfb-2a06deade35b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.038682 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.038717 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.038730 4996 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.038741 4996 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/db0401da-7bc1-4203-bdfb-2a06deade35b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.038751 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tgfv\" (UniqueName: \"kubernetes.io/projected/db0401da-7bc1-4203-bdfb-2a06deade35b-kube-api-access-7tgfv\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.038762 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db0401da-7bc1-4203-bdfb-2a06deade35b-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.268205 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" event={"ID":"d4d19959-1945-43a8-b005-f4f136fcdf10","Type":"ContainerStarted","Data":"215f106f0d9c551026adfc5b381d3af159e70c58769914ebae4d4a1bbbe0f33b"} Feb 28 
09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.270499 4996 generic.go:334] "Generic (PLEG): container finished" podID="5466330c-a12f-4f77-a90e-d731ad3282eb" containerID="6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a" exitCode=0 Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.270642 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-hff69" event={"ID":"5466330c-a12f-4f77-a90e-d731ad3282eb","Type":"ContainerDied","Data":"6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a"} Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.270738 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-hff69" event={"ID":"5466330c-a12f-4f77-a90e-d731ad3282eb","Type":"ContainerStarted","Data":"12d4765513d7fa934797b7bd0f21dbcf4801653851020a8eb472ea8709785df2"} Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.287306 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-kplwp" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.289402 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-kplwp" event={"ID":"db0401da-7bc1-4203-bdfb-2a06deade35b","Type":"ContainerDied","Data":"c23bcf07c830d8e6f1ac79f0f7649c2aed8523686f2314fb930edff2eaa6240e"} Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.289485 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c23bcf07c830d8e6f1ac79f0f7649c2aed8523686f2314fb930edff2eaa6240e" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.311554 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerStarted","Data":"9c475fe616c7c23a6828208b2ea6a8403a9427ba587656702733dc500feac909"} Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.311614 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerStarted","Data":"060f4ca35f7c41a8299b539abbc132581de1b121486fbd1bf8491538f5b2a3b7"} Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.314029 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-665ffb6dc-7h4gw" event={"ID":"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04","Type":"ContainerStarted","Data":"9ecc6821c1389733f0d0e27bea8c735faad7c8f188a4191e47bca794b3df08d0"} Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.318168 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4548f44b-9pcrx" event={"ID":"47bd376f-eac1-4b4d-b765-24ea791625ae","Type":"ContainerStarted","Data":"f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf"} Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.318214 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4548f44b-9pcrx" event={"ID":"47bd376f-eac1-4b4d-b765-24ea791625ae","Type":"ContainerStarted","Data":"afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790"} Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.318225 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4548f44b-9pcrx" event={"ID":"47bd376f-eac1-4b4d-b765-24ea791625ae","Type":"ContainerStarted","Data":"87e609050aa58f90b844fc1c73023b718c5b0a35359d2ca2d82924956faa49a6"} Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.318957 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.349531 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5d4548f44b-9pcrx" podStartSLOduration=2.34950323 podStartE2EDuration="2.34950323s" podCreationTimestamp="2026-02-28 09:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:28.335531892 +0000 UTC m=+1252.026334703" watchObservedRunningTime="2026-02-28 09:21:28.34950323 +0000 UTC m=+1252.040306041" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.495592 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 09:21:28 crc kubenswrapper[4996]: E0228 09:21:28.496065 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0401da-7bc1-4203-bdfb-2a06deade35b" containerName="cinder-db-sync" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.496083 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0401da-7bc1-4203-bdfb-2a06deade35b" containerName="cinder-db-sync" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.496276 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="db0401da-7bc1-4203-bdfb-2a06deade35b" containerName="cinder-db-sync" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.497443 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.504041 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.504602 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.504742 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.504932 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-8nlr8" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.547246 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.659130 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.659197 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.659232 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsdd\" (UniqueName: \"kubernetes.io/projected/66d086f8-902e-46c4-ba7f-bb6549f618ac-kube-api-access-tvsdd\") pod \"cinder-scheduler-0\" (UID: 
\"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.659300 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66d086f8-902e-46c4-ba7f-bb6549f618ac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.659381 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.659438 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-scripts\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.762492 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.762544 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-scripts\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: 
I0228 09:21:28.762580 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.762602 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.762630 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvsdd\" (UniqueName: \"kubernetes.io/projected/66d086f8-902e-46c4-ba7f-bb6549f618ac-kube-api-access-tvsdd\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.762677 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66d086f8-902e-46c4-ba7f-bb6549f618ac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.762782 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66d086f8-902e-46c4-ba7f-bb6549f618ac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.782622 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.793572 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-scripts\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.794596 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.795902 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.844749 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvsdd\" (UniqueName: \"kubernetes.io/projected/66d086f8-902e-46c4-ba7f-bb6549f618ac-kube-api-access-tvsdd\") pod \"cinder-scheduler-0\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.911794 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-hff69"] Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.914475 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.948996 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-2kbxj"] Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.951170 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.970236 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-2kbxj"] Feb 28 09:21:28 crc kubenswrapper[4996]: I0228 09:21:28.992637 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:28.994133 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.006974 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.013626 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.073741 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7df7a097-1a0b-4b81-b569-e9c0e28b7861-logs\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.073824 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.073849 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.073883 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-config\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.073905 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.073937 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-scripts\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.073979 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgnh\" (UniqueName: \"kubernetes.io/projected/7df7a097-1a0b-4b81-b569-e9c0e28b7861-kube-api-access-qtgnh\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.073999 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.074077 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data-custom\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.074097 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.074126 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7df7a097-1a0b-4b81-b569-e9c0e28b7861-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.074156 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4fvh\" (UniqueName: \"kubernetes.io/projected/36aae0d9-72c5-4af8-9455-950962baeb28-kube-api-access-m4fvh\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.175724 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.175785 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.175856 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-config\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.175887 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.175933 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-scripts\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.175952 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgnh\" (UniqueName: \"kubernetes.io/projected/7df7a097-1a0b-4b81-b569-e9c0e28b7861-kube-api-access-qtgnh\") pod \"cinder-api-0\" (UID: 
\"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.175977 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.176048 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data-custom\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.176072 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.176106 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7df7a097-1a0b-4b81-b569-e9c0e28b7861-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.176143 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4fvh\" (UniqueName: \"kubernetes.io/projected/36aae0d9-72c5-4af8-9455-950962baeb28-kube-api-access-m4fvh\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.176183 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7df7a097-1a0b-4b81-b569-e9c0e28b7861-logs\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.176668 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7df7a097-1a0b-4b81-b569-e9c0e28b7861-logs\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.177687 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.178388 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-config\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.179209 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.179308 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" 
(UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.180548 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7df7a097-1a0b-4b81-b569-e9c0e28b7861-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.182745 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.183782 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.195645 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-scripts\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.197908 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data-custom\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.198605 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgnh\" (UniqueName: 
\"kubernetes.io/projected/7df7a097-1a0b-4b81-b569-e9c0e28b7861-kube-api-access-qtgnh\") pod \"cinder-api-0\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " pod="openstack/cinder-api-0" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.202557 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4fvh\" (UniqueName: \"kubernetes.io/projected/36aae0d9-72c5-4af8-9455-950962baeb28-kube-api-access-m4fvh\") pod \"dnsmasq-dns-6d97fcdd8f-2kbxj\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.335498 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.349987 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-hff69" event={"ID":"5466330c-a12f-4f77-a90e-d731ad3282eb","Type":"ContainerStarted","Data":"64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615"} Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.350075 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.350133 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:29 crc kubenswrapper[4996]: I0228 09:21:29.350361 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.378895 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-hff69" podUID="5466330c-a12f-4f77-a90e-d731ad3282eb" containerName="dnsmasq-dns" containerID="cri-o://64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615" gracePeriod=10 Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.676119 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-hff69" podStartSLOduration=4.676096858 podStartE2EDuration="4.676096858s" podCreationTimestamp="2026-02-28 09:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:29.369626672 +0000 UTC m=+1253.060429483" watchObservedRunningTime="2026-02-28 09:21:30.676096858 +0000 UTC m=+1254.366899669" Feb 28 09:21:30 crc kubenswrapper[4996]: W0228 09:21:30.688470 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66d086f8_902e_46c4_ba7f_bb6549f618ac.slice/crio-86d826ac5bc66d4e09e764bd779c0efdb0232855e37a208aa540782c3b13342e WatchSource:0}: Error finding container 86d826ac5bc66d4e09e764bd779c0efdb0232855e37a208aa540782c3b13342e: Status 404 returned error can't find the container with id 86d826ac5bc66d4e09e764bd779c0efdb0232855e37a208aa540782c3b13342e Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.688614 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.748424 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 09:21:30 crc kubenswrapper[4996]: W0228 09:21:30.778724 4996 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7df7a097_1a0b_4b81_b569_e9c0e28b7861.slice/crio-34094ad983c96e0ec48bf43d913d6f8e7fd30fb84105339355554199ac2c9992 WatchSource:0}: Error finding container 34094ad983c96e0ec48bf43d913d6f8e7fd30fb84105339355554199ac2c9992: Status 404 returned error can't find the container with id 34094ad983c96e0ec48bf43d913d6f8e7fd30fb84105339355554199ac2c9992 Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.830899 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-2kbxj"] Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.855248 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.927390 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zft65\" (UniqueName: \"kubernetes.io/projected/5466330c-a12f-4f77-a90e-d731ad3282eb-kube-api-access-zft65\") pod \"5466330c-a12f-4f77-a90e-d731ad3282eb\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.927508 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-config\") pod \"5466330c-a12f-4f77-a90e-d731ad3282eb\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.927555 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-dns-svc\") pod \"5466330c-a12f-4f77-a90e-d731ad3282eb\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.927725 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-nb\") pod \"5466330c-a12f-4f77-a90e-d731ad3282eb\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.927767 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-sb\") pod \"5466330c-a12f-4f77-a90e-d731ad3282eb\" (UID: \"5466330c-a12f-4f77-a90e-d731ad3282eb\") " Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.931151 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5466330c-a12f-4f77-a90e-d731ad3282eb-kube-api-access-zft65" (OuterVolumeSpecName: "kube-api-access-zft65") pod "5466330c-a12f-4f77-a90e-d731ad3282eb" (UID: "5466330c-a12f-4f77-a90e-d731ad3282eb"). InnerVolumeSpecName "kube-api-access-zft65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.986540 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5466330c-a12f-4f77-a90e-d731ad3282eb" (UID: "5466330c-a12f-4f77-a90e-d731ad3282eb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:30 crc kubenswrapper[4996]: I0228 09:21:30.997231 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-config" (OuterVolumeSpecName: "config") pod "5466330c-a12f-4f77-a90e-d731ad3282eb" (UID: "5466330c-a12f-4f77-a90e-d731ad3282eb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.007503 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5466330c-a12f-4f77-a90e-d731ad3282eb" (UID: "5466330c-a12f-4f77-a90e-d731ad3282eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.019862 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5466330c-a12f-4f77-a90e-d731ad3282eb" (UID: "5466330c-a12f-4f77-a90e-d731ad3282eb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.030921 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.030966 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.031030 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zft65\" (UniqueName: \"kubernetes.io/projected/5466330c-a12f-4f77-a90e-d731ad3282eb-kube-api-access-zft65\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.031047 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-config\") on node \"crc\" DevicePath \"\"" Feb 28 
09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.031059 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5466330c-a12f-4f77-a90e-d731ad3282eb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.406887 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" event={"ID":"d4d19959-1945-43a8-b005-f4f136fcdf10","Type":"ContainerStarted","Data":"b209d05991f5bdfcc09869c796a5af7cd3af2551f1d4e58a07adb52c09415756"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.407304 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" event={"ID":"d4d19959-1945-43a8-b005-f4f136fcdf10","Type":"ContainerStarted","Data":"086915bb17790beb6ac98575116a3458778cce4902d6d93e598da20a57b8d4f0"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.410873 4996 generic.go:334] "Generic (PLEG): container finished" podID="5466330c-a12f-4f77-a90e-d731ad3282eb" containerID="64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615" exitCode=0 Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.410937 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-hff69" event={"ID":"5466330c-a12f-4f77-a90e-d731ad3282eb","Type":"ContainerDied","Data":"64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.410960 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-hff69" event={"ID":"5466330c-a12f-4f77-a90e-d731ad3282eb","Type":"ContainerDied","Data":"12d4765513d7fa934797b7bd0f21dbcf4801653851020a8eb472ea8709785df2"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.410982 4996 scope.go:117] "RemoveContainer" containerID="64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615" Feb 28 09:21:31 crc 
kubenswrapper[4996]: I0228 09:21:31.411131 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-hff69" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.427395 4996 generic.go:334] "Generic (PLEG): container finished" podID="36aae0d9-72c5-4af8-9455-950962baeb28" containerID="fe4464f0597bc1cc7aed7d0b2a3320643c9d2f0e805e34f9b666fecd4e233eae" exitCode=0 Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.428121 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" event={"ID":"36aae0d9-72c5-4af8-9455-950962baeb28","Type":"ContainerDied","Data":"fe4464f0597bc1cc7aed7d0b2a3320643c9d2f0e805e34f9b666fecd4e233eae"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.428152 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" event={"ID":"36aae0d9-72c5-4af8-9455-950962baeb28","Type":"ContainerStarted","Data":"64ef11a14a7ff8cefce3cebba908b254626a1b1bce20bba9b276db3393d9b419"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.448840 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerStarted","Data":"b720c2e1fcdafeecae7d910021262c3d1750fef406a76cac4d0493f1ce2d67c9"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.449660 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.454978 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7df7a097-1a0b-4b81-b569-e9c0e28b7861","Type":"ContainerStarted","Data":"34094ad983c96e0ec48bf43d913d6f8e7fd30fb84105339355554199ac2c9992"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.457658 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-keystone-listener-7457c94496-jvp8b" podStartSLOduration=2.710555163 podStartE2EDuration="5.457635493s" podCreationTimestamp="2026-02-28 09:21:26 +0000 UTC" firstStartedPulling="2026-02-28 09:21:27.438324536 +0000 UTC m=+1251.129127347" lastFinishedPulling="2026-02-28 09:21:30.185404876 +0000 UTC m=+1253.876207677" observedRunningTime="2026-02-28 09:21:31.427699078 +0000 UTC m=+1255.118501889" watchObservedRunningTime="2026-02-28 09:21:31.457635493 +0000 UTC m=+1255.148438324" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.457931 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66d086f8-902e-46c4-ba7f-bb6549f618ac","Type":"ContainerStarted","Data":"86d826ac5bc66d4e09e764bd779c0efdb0232855e37a208aa540782c3b13342e"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.486352 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.359407736 podStartE2EDuration="6.486329088s" podCreationTimestamp="2026-02-28 09:21:25 +0000 UTC" firstStartedPulling="2026-02-28 09:21:26.0562809 +0000 UTC m=+1249.747083721" lastFinishedPulling="2026-02-28 09:21:30.183202262 +0000 UTC m=+1253.874005073" observedRunningTime="2026-02-28 09:21:31.479735698 +0000 UTC m=+1255.170538509" watchObservedRunningTime="2026-02-28 09:21:31.486329088 +0000 UTC m=+1255.177131909" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.512754 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-665ffb6dc-7h4gw" event={"ID":"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04","Type":"ContainerStarted","Data":"d73800f39076f1a2ac5fc6a1a9faf9de2c57f1244cfc9092e3805b2de67db3e8"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.512812 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-665ffb6dc-7h4gw" 
event={"ID":"85fcdb36-feb8-4f2f-a91e-ffbce6e91d04","Type":"ContainerStarted","Data":"182d0e5cebaabc32f5968752dc381f3a4107e11a8881fff94f3812e6da7deea7"} Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.536832 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-665ffb6dc-7h4gw" podStartSLOduration=2.95451605 podStartE2EDuration="5.53681143s" podCreationTimestamp="2026-02-28 09:21:26 +0000 UTC" firstStartedPulling="2026-02-28 09:21:27.571772197 +0000 UTC m=+1251.262575008" lastFinishedPulling="2026-02-28 09:21:30.154067577 +0000 UTC m=+1253.844870388" observedRunningTime="2026-02-28 09:21:31.529829951 +0000 UTC m=+1255.220632762" watchObservedRunningTime="2026-02-28 09:21:31.53681143 +0000 UTC m=+1255.227614241" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.615913 4996 scope.go:117] "RemoveContainer" containerID="6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.637122 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-hff69"] Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.661453 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-hff69"] Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.716288 4996 scope.go:117] "RemoveContainer" containerID="64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615" Feb 28 09:21:31 crc kubenswrapper[4996]: E0228 09:21:31.722592 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615\": container with ID starting with 64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615 not found: ID does not exist" containerID="64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.722630 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615"} err="failed to get container status \"64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615\": rpc error: code = NotFound desc = could not find container \"64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615\": container with ID starting with 64ff52083c4e45836a1ea5f68f91428a79684e1d497b6891ce40082db5778615 not found: ID does not exist" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.722654 4996 scope.go:117] "RemoveContainer" containerID="6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a" Feb 28 09:21:31 crc kubenswrapper[4996]: E0228 09:21:31.723294 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a\": container with ID starting with 6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a not found: ID does not exist" containerID="6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a" Feb 28 09:21:31 crc kubenswrapper[4996]: I0228 09:21:31.723321 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a"} err="failed to get container status \"6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a\": rpc error: code = NotFound desc = could not find container \"6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a\": container with ID starting with 6eb38c587552c341c1cd261cbf6d353c1d987c32ad22bbf52a335984f848865a not found: ID does not exist" Feb 28 09:21:32 crc kubenswrapper[4996]: I0228 09:21:32.557283 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"66d086f8-902e-46c4-ba7f-bb6549f618ac","Type":"ContainerStarted","Data":"a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f"} Feb 28 09:21:32 crc kubenswrapper[4996]: I0228 09:21:32.593250 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" event={"ID":"36aae0d9-72c5-4af8-9455-950962baeb28","Type":"ContainerStarted","Data":"158595799c0ca17e54eb33f536b1bf532e3b707aa9ed6395879ad074acf10061"} Feb 28 09:21:32 crc kubenswrapper[4996]: I0228 09:21:32.594637 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:32 crc kubenswrapper[4996]: I0228 09:21:32.603139 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7df7a097-1a0b-4b81-b569-e9c0e28b7861","Type":"ContainerStarted","Data":"5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7"} Feb 28 09:21:32 crc kubenswrapper[4996]: I0228 09:21:32.623966 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" podStartSLOduration=4.623950085 podStartE2EDuration="4.623950085s" podCreationTimestamp="2026-02-28 09:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:32.623196507 +0000 UTC m=+1256.313999318" watchObservedRunningTime="2026-02-28 09:21:32.623950085 +0000 UTC m=+1256.314752906" Feb 28 09:21:32 crc kubenswrapper[4996]: I0228 09:21:32.764093 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.057920 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5466330c-a12f-4f77-a90e-d731ad3282eb" path="/var/lib/kubelet/pods/5466330c-a12f-4f77-a90e-d731ad3282eb/volumes" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.546372 4996 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/barbican-api-58dc667d-krgck"] Feb 28 09:21:33 crc kubenswrapper[4996]: E0228 09:21:33.546794 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5466330c-a12f-4f77-a90e-d731ad3282eb" containerName="init" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.546814 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5466330c-a12f-4f77-a90e-d731ad3282eb" containerName="init" Feb 28 09:21:33 crc kubenswrapper[4996]: E0228 09:21:33.546834 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5466330c-a12f-4f77-a90e-d731ad3282eb" containerName="dnsmasq-dns" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.546846 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5466330c-a12f-4f77-a90e-d731ad3282eb" containerName="dnsmasq-dns" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.547344 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5466330c-a12f-4f77-a90e-d731ad3282eb" containerName="dnsmasq-dns" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.548505 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.550521 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.550926 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.559711 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58dc667d-krgck"] Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.584491 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-public-tls-certs\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.584578 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-combined-ca-bundle\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.584597 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt6nh\" (UniqueName: \"kubernetes.io/projected/ae98f057-6852-4905-a4d6-5b6d121cb4a1-kube-api-access-tt6nh\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.584625 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-config-data-custom\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.584658 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-internal-tls-certs\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.584796 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae98f057-6852-4905-a4d6-5b6d121cb4a1-logs\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.584916 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-config-data\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.612867 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66d086f8-902e-46c4-ba7f-bb6549f618ac","Type":"ContainerStarted","Data":"a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405"} Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.615951 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"7df7a097-1a0b-4b81-b569-e9c0e28b7861","Type":"ContainerStarted","Data":"c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87"} Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.631668 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.993554045 podStartE2EDuration="5.631654237s" podCreationTimestamp="2026-02-28 09:21:28 +0000 UTC" firstStartedPulling="2026-02-28 09:21:30.69105506 +0000 UTC m=+1254.381857871" lastFinishedPulling="2026-02-28 09:21:31.329155252 +0000 UTC m=+1255.019958063" observedRunningTime="2026-02-28 09:21:33.63054674 +0000 UTC m=+1257.321349551" watchObservedRunningTime="2026-02-28 09:21:33.631654237 +0000 UTC m=+1257.322457048" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.666281 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.666262875 podStartE2EDuration="5.666262875s" podCreationTimestamp="2026-02-28 09:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:33.650943424 +0000 UTC m=+1257.341746235" watchObservedRunningTime="2026-02-28 09:21:33.666262875 +0000 UTC m=+1257.357065676" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.686818 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-public-tls-certs\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.687634 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-combined-ca-bundle\") pod 
\"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.687660 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt6nh\" (UniqueName: \"kubernetes.io/projected/ae98f057-6852-4905-a4d6-5b6d121cb4a1-kube-api-access-tt6nh\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.687692 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-config-data-custom\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.688319 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-internal-tls-certs\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.688404 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae98f057-6852-4905-a4d6-5b6d121cb4a1-logs\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.688475 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-config-data\") pod \"barbican-api-58dc667d-krgck\" (UID: 
\"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.688652 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae98f057-6852-4905-a4d6-5b6d121cb4a1-logs\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.692612 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-internal-tls-certs\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.692765 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-combined-ca-bundle\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.693682 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-config-data\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.694205 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-public-tls-certs\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc 
kubenswrapper[4996]: I0228 09:21:33.704741 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt6nh\" (UniqueName: \"kubernetes.io/projected/ae98f057-6852-4905-a4d6-5b6d121cb4a1-kube-api-access-tt6nh\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.717061 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae98f057-6852-4905-a4d6-5b6d121cb4a1-config-data-custom\") pod \"barbican-api-58dc667d-krgck\" (UID: \"ae98f057-6852-4905-a4d6-5b6d121cb4a1\") " pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.865215 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:33 crc kubenswrapper[4996]: I0228 09:21:33.914979 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 28 09:21:34 crc kubenswrapper[4996]: I0228 09:21:34.309552 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:34 crc kubenswrapper[4996]: I0228 09:21:34.350496 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 28 09:21:34 crc kubenswrapper[4996]: I0228 09:21:34.427607 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58dc667d-krgck"] Feb 28 09:21:34 crc kubenswrapper[4996]: I0228 09:21:34.628112 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dc667d-krgck" event={"ID":"ae98f057-6852-4905-a4d6-5b6d121cb4a1","Type":"ContainerStarted","Data":"e8fcb6c56374237f69399a8e581959a585b93f5e66e7d0257e1103ceaa54555a"} Feb 28 09:21:34 crc kubenswrapper[4996]: I0228 09:21:34.628146 
4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dc667d-krgck" event={"ID":"ae98f057-6852-4905-a4d6-5b6d121cb4a1","Type":"ContainerStarted","Data":"b8ea7fd30c6f94e99559a1ba113367b8f3fb50c5d13417f251583f15339eeab7"} Feb 28 09:21:34 crc kubenswrapper[4996]: I0228 09:21:34.628255 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerName="cinder-api-log" containerID="cri-o://5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7" gracePeriod=30 Feb 28 09:21:34 crc kubenswrapper[4996]: I0228 09:21:34.629994 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerName="cinder-api" containerID="cri-o://c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87" gracePeriod=30 Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.394470 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.416824 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-scripts\") pod \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.416923 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7df7a097-1a0b-4b81-b569-e9c0e28b7861-logs\") pod \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.416945 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data-custom\") pod \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.416995 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtgnh\" (UniqueName: \"kubernetes.io/projected/7df7a097-1a0b-4b81-b569-e9c0e28b7861-kube-api-access-qtgnh\") pod \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.417104 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-combined-ca-bundle\") pod \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.417136 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data\") pod \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.417154 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7df7a097-1a0b-4b81-b569-e9c0e28b7861-etc-machine-id\") pod \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\" (UID: \"7df7a097-1a0b-4b81-b569-e9c0e28b7861\") " Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.417568 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7df7a097-1a0b-4b81-b569-e9c0e28b7861-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7df7a097-1a0b-4b81-b569-e9c0e28b7861" (UID: "7df7a097-1a0b-4b81-b569-e9c0e28b7861"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.418791 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df7a097-1a0b-4b81-b569-e9c0e28b7861-logs" (OuterVolumeSpecName: "logs") pod "7df7a097-1a0b-4b81-b569-e9c0e28b7861" (UID: "7df7a097-1a0b-4b81-b569-e9c0e28b7861"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.472154 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7df7a097-1a0b-4b81-b569-e9c0e28b7861" (UID: "7df7a097-1a0b-4b81-b569-e9c0e28b7861"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.472194 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df7a097-1a0b-4b81-b569-e9c0e28b7861-kube-api-access-qtgnh" (OuterVolumeSpecName: "kube-api-access-qtgnh") pod "7df7a097-1a0b-4b81-b569-e9c0e28b7861" (UID: "7df7a097-1a0b-4b81-b569-e9c0e28b7861"). InnerVolumeSpecName "kube-api-access-qtgnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.472197 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-scripts" (OuterVolumeSpecName: "scripts") pod "7df7a097-1a0b-4b81-b569-e9c0e28b7861" (UID: "7df7a097-1a0b-4b81-b569-e9c0e28b7861"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.494984 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7df7a097-1a0b-4b81-b569-e9c0e28b7861" (UID: "7df7a097-1a0b-4b81-b569-e9c0e28b7861"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.519118 4996 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7df7a097-1a0b-4b81-b569-e9c0e28b7861-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.519155 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.519164 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7df7a097-1a0b-4b81-b569-e9c0e28b7861-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.519174 4996 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.519183 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtgnh\" (UniqueName: \"kubernetes.io/projected/7df7a097-1a0b-4b81-b569-e9c0e28b7861-kube-api-access-qtgnh\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.519194 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.536434 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data" (OuterVolumeSpecName: "config-data") pod "7df7a097-1a0b-4b81-b569-e9c0e28b7861" (UID: "7df7a097-1a0b-4b81-b569-e9c0e28b7861"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.620901 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7df7a097-1a0b-4b81-b569-e9c0e28b7861-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.636572 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58dc667d-krgck" event={"ID":"ae98f057-6852-4905-a4d6-5b6d121cb4a1","Type":"ContainerStarted","Data":"b369cd45f150a19b9f79c00920810cef686484511b5c825675688a0aee44c068"} Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.636735 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.638929 4996 generic.go:334] "Generic (PLEG): container finished" podID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerID="c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87" exitCode=0 Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.639017 4996 generic.go:334] "Generic (PLEG): container finished" podID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerID="5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7" exitCode=143 Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.639094 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.639137 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7df7a097-1a0b-4b81-b569-e9c0e28b7861","Type":"ContainerDied","Data":"c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87"} Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.639167 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7df7a097-1a0b-4b81-b569-e9c0e28b7861","Type":"ContainerDied","Data":"5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7"} Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.639184 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7df7a097-1a0b-4b81-b569-e9c0e28b7861","Type":"ContainerDied","Data":"34094ad983c96e0ec48bf43d913d6f8e7fd30fb84105339355554199ac2c9992"} Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.639205 4996 scope.go:117] "RemoveContainer" containerID="c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.666349 4996 scope.go:117] "RemoveContainer" containerID="5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.674340 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58dc667d-krgck" podStartSLOduration=2.67432105 podStartE2EDuration="2.67432105s" podCreationTimestamp="2026-02-28 09:21:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:35.662912294 +0000 UTC m=+1259.353715115" watchObservedRunningTime="2026-02-28 09:21:35.67432105 +0000 UTC m=+1259.365123871" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.690592 4996 scope.go:117] "RemoveContainer" 
containerID="c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87" Feb 28 09:21:35 crc kubenswrapper[4996]: E0228 09:21:35.696257 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87\": container with ID starting with c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87 not found: ID does not exist" containerID="c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.696302 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87"} err="failed to get container status \"c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87\": rpc error: code = NotFound desc = could not find container \"c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87\": container with ID starting with c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87 not found: ID does not exist" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.696329 4996 scope.go:117] "RemoveContainer" containerID="5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7" Feb 28 09:21:35 crc kubenswrapper[4996]: E0228 09:21:35.696725 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7\": container with ID starting with 5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7 not found: ID does not exist" containerID="5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.696755 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7"} err="failed to get container status \"5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7\": rpc error: code = NotFound desc = could not find container \"5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7\": container with ID starting with 5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7 not found: ID does not exist" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.696774 4996 scope.go:117] "RemoveContainer" containerID="c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.697029 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87"} err="failed to get container status \"c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87\": rpc error: code = NotFound desc = could not find container \"c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87\": container with ID starting with c1ad2ddae00e34ad2fe0c23aca6b5387483870393aba30b8294a3d0c96bbab87 not found: ID does not exist" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.697057 4996 scope.go:117] "RemoveContainer" containerID="5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.697750 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7"} err="failed to get container status \"5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7\": rpc error: code = NotFound desc = could not find container \"5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7\": container with ID starting with 5fe029945f49488af551ea7e9f2d8181972466c4174916882209795c73ec86e7 not found: ID does not 
exist" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.697825 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.705318 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.734373 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 28 09:21:35 crc kubenswrapper[4996]: E0228 09:21:35.734738 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerName="cinder-api-log" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.734757 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerName="cinder-api-log" Feb 28 09:21:35 crc kubenswrapper[4996]: E0228 09:21:35.734770 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerName="cinder-api" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.734779 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerName="cinder-api" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.734972 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerName="cinder-api" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.734994 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" containerName="cinder-api-log" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.735859 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.738116 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.738333 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.738509 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.744243 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.824870 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-config-data-custom\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.824991 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f13ff650-58de-4d3f-a56b-f77ef33ddf89-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.825052 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f13ff650-58de-4d3f-a56b-f77ef33ddf89-logs\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.825097 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.825135 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.825202 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5jb\" (UniqueName: \"kubernetes.io/projected/f13ff650-58de-4d3f-a56b-f77ef33ddf89-kube-api-access-cj5jb\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.825330 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.825459 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-config-data\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.825535 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-scripts\") pod 
\"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.927482 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f13ff650-58de-4d3f-a56b-f77ef33ddf89-logs\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.927803 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.927831 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.927848 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj5jb\" (UniqueName: \"kubernetes.io/projected/f13ff650-58de-4d3f-a56b-f77ef33ddf89-kube-api-access-cj5jb\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.927885 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.927929 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-config-data\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.927950 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-scripts\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.927994 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-config-data-custom\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.928055 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f13ff650-58de-4d3f-a56b-f77ef33ddf89-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.928137 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f13ff650-58de-4d3f-a56b-f77ef33ddf89-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.928106 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f13ff650-58de-4d3f-a56b-f77ef33ddf89-logs\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 
09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.939195 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-public-tls-certs\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.939386 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-scripts\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.940532 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.942653 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-config-data-custom\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.945894 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-config-data\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.946412 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f13ff650-58de-4d3f-a56b-f77ef33ddf89-combined-ca-bundle\") pod \"cinder-api-0\" 
(UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:35 crc kubenswrapper[4996]: I0228 09:21:35.949096 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj5jb\" (UniqueName: \"kubernetes.io/projected/f13ff650-58de-4d3f-a56b-f77ef33ddf89-kube-api-access-cj5jb\") pod \"cinder-api-0\" (UID: \"f13ff650-58de-4d3f-a56b-f77ef33ddf89\") " pod="openstack/cinder-api-0" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.077495 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.081455 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.308633 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.376362 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.457424 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.638251 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.653308 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.731452 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67b5c7c7f7-mzzc4"] Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.731686 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67b5c7c7f7-mzzc4" 
podUID="49585240-7b27-458c-8d70-d23d8326bb94" containerName="neutron-api" containerID="cri-o://b7a95f3fc3578ddf33e205b07480ce2fe00ba8f0d6f4c709dce74c1bc923a0be" gracePeriod=30 Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.732359 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67b5c7c7f7-mzzc4" podUID="49585240-7b27-458c-8d70-d23d8326bb94" containerName="neutron-httpd" containerID="cri-o://541161af47e15e689a7869b89d735ee7e1ec2e404f5bdd2a6c84cafb7ae9f3bf" gracePeriod=30 Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.759448 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.856412 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-679bcc7697-9hs5j"] Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.859548 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.882542 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-679bcc7697-9hs5j"] Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.976491 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-combined-ca-bundle\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.976589 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-public-tls-certs\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " 
pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.976655 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-internal-tls-certs\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.976682 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-config\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.976773 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-ovndb-tls-certs\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.977139 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lkfh\" (UniqueName: \"kubernetes.io/projected/50e72561-9c77-43f9-8f8d-0c9be05be3f6-kube-api-access-7lkfh\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:36 crc kubenswrapper[4996]: I0228 09:21:36.977405 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-httpd-config\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " 
pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.052337 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df7a097-1a0b-4b81-b569-e9c0e28b7861" path="/var/lib/kubelet/pods/7df7a097-1a0b-4b81-b569-e9c0e28b7861/volumes" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.079926 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-ovndb-tls-certs\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.079985 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lkfh\" (UniqueName: \"kubernetes.io/projected/50e72561-9c77-43f9-8f8d-0c9be05be3f6-kube-api-access-7lkfh\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.080165 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-httpd-config\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.080244 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-combined-ca-bundle\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.080300 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-public-tls-certs\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.080351 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-internal-tls-certs\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.080380 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-config\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.087076 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-internal-tls-certs\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.089206 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-httpd-config\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.089210 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-ovndb-tls-certs\") pod \"neutron-679bcc7697-9hs5j\" (UID: 
\"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.089532 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-public-tls-certs\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.090553 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-config\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.091897 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50e72561-9c77-43f9-8f8d-0c9be05be3f6-combined-ca-bundle\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.106293 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lkfh\" (UniqueName: \"kubernetes.io/projected/50e72561-9c77-43f9-8f8d-0c9be05be3f6-kube-api-access-7lkfh\") pod \"neutron-679bcc7697-9hs5j\" (UID: \"50e72561-9c77-43f9-8f8d-0c9be05be3f6\") " pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.243835 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.694701 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f13ff650-58de-4d3f-a56b-f77ef33ddf89","Type":"ContainerStarted","Data":"cf4dcae5f2f5a49b9adfed83bb320bbd9efcd3690b7a2cbb4bcf383051528cf9"} Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.695642 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f13ff650-58de-4d3f-a56b-f77ef33ddf89","Type":"ContainerStarted","Data":"c0700fe3132aebfd688012de6774be1eaffeb2f3e6bfd8c16eba534f862f1afd"} Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.698425 4996 generic.go:334] "Generic (PLEG): container finished" podID="49585240-7b27-458c-8d70-d23d8326bb94" containerID="541161af47e15e689a7869b89d735ee7e1ec2e404f5bdd2a6c84cafb7ae9f3bf" exitCode=0 Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.699730 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67b5c7c7f7-mzzc4" event={"ID":"49585240-7b27-458c-8d70-d23d8326bb94","Type":"ContainerDied","Data":"541161af47e15e689a7869b89d735ee7e1ec2e404f5bdd2a6c84cafb7ae9f3bf"} Feb 28 09:21:37 crc kubenswrapper[4996]: I0228 09:21:37.920888 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-679bcc7697-9hs5j"] Feb 28 09:21:38 crc kubenswrapper[4996]: I0228 09:21:38.632772 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-67b5c7c7f7-mzzc4" podUID="49585240-7b27-458c-8d70-d23d8326bb94" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.153:9696/\": dial tcp 10.217.0.153:9696: connect: connection refused" Feb 28 09:21:38 crc kubenswrapper[4996]: I0228 09:21:38.710575 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"f13ff650-58de-4d3f-a56b-f77ef33ddf89","Type":"ContainerStarted","Data":"de91e31864c30afb79616eef15a50149a3c656ac4a05131729bc99f2f4e58cdd"} Feb 28 09:21:38 crc kubenswrapper[4996]: I0228 09:21:38.711810 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 28 09:21:38 crc kubenswrapper[4996]: I0228 09:21:38.714885 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-679bcc7697-9hs5j" event={"ID":"50e72561-9c77-43f9-8f8d-0c9be05be3f6","Type":"ContainerStarted","Data":"cff47a7185fe3f5e44b4bce21f4602d314abb8b61d844170255a6a083922828d"} Feb 28 09:21:38 crc kubenswrapper[4996]: I0228 09:21:38.715096 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-679bcc7697-9hs5j" event={"ID":"50e72561-9c77-43f9-8f8d-0c9be05be3f6","Type":"ContainerStarted","Data":"64e5e83f060bb036cbfa239be3cf86a21a8854971b505393ca68c7a9476cbeba"} Feb 28 09:21:38 crc kubenswrapper[4996]: I0228 09:21:38.715180 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-679bcc7697-9hs5j" event={"ID":"50e72561-9c77-43f9-8f8d-0c9be05be3f6","Type":"ContainerStarted","Data":"e285750629accdf5da2ee6b3ad1c233335a03ea7980c4ac36a5821421feeae9a"} Feb 28 09:21:38 crc kubenswrapper[4996]: I0228 09:21:38.715638 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:21:38 crc kubenswrapper[4996]: I0228 09:21:38.756392 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.756370361 podStartE2EDuration="3.756370361s" podCreationTimestamp="2026-02-28 09:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:38.745780105 +0000 UTC m=+1262.436582926" watchObservedRunningTime="2026-02-28 09:21:38.756370361 +0000 UTC m=+1262.447173182" Feb 28 09:21:38 crc 
kubenswrapper[4996]: I0228 09:21:38.796886 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-679bcc7697-9hs5j" podStartSLOduration=2.7968643220000002 podStartE2EDuration="2.796864322s" podCreationTimestamp="2026-02-28 09:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:38.7889208 +0000 UTC m=+1262.479723611" watchObservedRunningTime="2026-02-28 09:21:38.796864322 +0000 UTC m=+1262.487667143" Feb 28 09:21:38 crc kubenswrapper[4996]: I0228 09:21:38.850559 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.238624 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6ccc6bcbc4-2fmz9" Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.328300 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55cc9dfcd4-m6gv8"] Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.338353 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.339593 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.469278 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.489221 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-xfbnm"] Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.489665 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" podUID="1aa63814-587f-4839-bf7f-4bc1e02f8704" containerName="dnsmasq-dns" 
containerID="cri-o://ce842fd81361d42e7c8b9c73cfb896c359f0504e5bfbe215601faf167ab93b2d" gracePeriod=10 Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.723772 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.730190 4996 generic.go:334] "Generic (PLEG): container finished" podID="1aa63814-587f-4839-bf7f-4bc1e02f8704" containerID="ce842fd81361d42e7c8b9c73cfb896c359f0504e5bfbe215601faf167ab93b2d" exitCode=0 Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.730941 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" event={"ID":"1aa63814-587f-4839-bf7f-4bc1e02f8704","Type":"ContainerDied","Data":"ce842fd81361d42e7c8b9c73cfb896c359f0504e5bfbe215601faf167ab93b2d"} Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.731102 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55cc9dfcd4-m6gv8" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon-log" containerID="cri-o://d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a" gracePeriod=30 Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.731267 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="66d086f8-902e-46c4-ba7f-bb6549f618ac" containerName="cinder-scheduler" containerID="cri-o://a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f" gracePeriod=30 Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.731710 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55cc9dfcd4-m6gv8" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon" containerID="cri-o://439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284" gracePeriod=30 Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.731925 4996 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="66d086f8-902e-46c4-ba7f-bb6549f618ac" containerName="probe" containerID="cri-o://a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405" gracePeriod=30 Feb 28 09:21:39 crc kubenswrapper[4996]: I0228 09:21:39.789374 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.026267 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.074052 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-sb\") pod \"1aa63814-587f-4839-bf7f-4bc1e02f8704\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.074121 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-config\") pod \"1aa63814-587f-4839-bf7f-4bc1e02f8704\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.074148 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-nb\") pod \"1aa63814-587f-4839-bf7f-4bc1e02f8704\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.074181 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-dns-svc\") pod \"1aa63814-587f-4839-bf7f-4bc1e02f8704\" (UID: 
\"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.074294 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6x2q\" (UniqueName: \"kubernetes.io/projected/1aa63814-587f-4839-bf7f-4bc1e02f8704-kube-api-access-c6x2q\") pod \"1aa63814-587f-4839-bf7f-4bc1e02f8704\" (UID: \"1aa63814-587f-4839-bf7f-4bc1e02f8704\") " Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.077547 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-594c4f7c44-lnbrv"] Feb 28 09:21:40 crc kubenswrapper[4996]: E0228 09:21:40.077886 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa63814-587f-4839-bf7f-4bc1e02f8704" containerName="dnsmasq-dns" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.077902 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa63814-587f-4839-bf7f-4bc1e02f8704" containerName="dnsmasq-dns" Feb 28 09:21:40 crc kubenswrapper[4996]: E0228 09:21:40.077927 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1aa63814-587f-4839-bf7f-4bc1e02f8704" containerName="init" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.077933 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1aa63814-587f-4839-bf7f-4bc1e02f8704" containerName="init" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.078143 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aa63814-587f-4839-bf7f-4bc1e02f8704" containerName="dnsmasq-dns" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.078976 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.104000 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aa63814-587f-4839-bf7f-4bc1e02f8704-kube-api-access-c6x2q" (OuterVolumeSpecName: "kube-api-access-c6x2q") pod "1aa63814-587f-4839-bf7f-4bc1e02f8704" (UID: "1aa63814-587f-4839-bf7f-4bc1e02f8704"). InnerVolumeSpecName "kube-api-access-c6x2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.119654 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-594c4f7c44-lnbrv"] Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.164401 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1aa63814-587f-4839-bf7f-4bc1e02f8704" (UID: "1aa63814-587f-4839-bf7f-4bc1e02f8704"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.173136 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1aa63814-587f-4839-bf7f-4bc1e02f8704" (UID: "1aa63814-587f-4839-bf7f-4bc1e02f8704"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183010 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d17faf34-1a55-4544-8da0-2b15159ff1d6-logs\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183349 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-internal-tls-certs\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183684 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-public-tls-certs\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183750 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-combined-ca-bundle\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183786 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-scripts\") pod \"placement-594c4f7c44-lnbrv\" (UID: 
\"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183803 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2lwp\" (UniqueName: \"kubernetes.io/projected/d17faf34-1a55-4544-8da0-2b15159ff1d6-kube-api-access-p2lwp\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183851 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-config-data\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183949 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183960 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6x2q\" (UniqueName: \"kubernetes.io/projected/1aa63814-587f-4839-bf7f-4bc1e02f8704-kube-api-access-c6x2q\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.183971 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.192406 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-dns-svc" (OuterVolumeSpecName: "dns-svc") pod 
"1aa63814-587f-4839-bf7f-4bc1e02f8704" (UID: "1aa63814-587f-4839-bf7f-4bc1e02f8704"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.209876 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-config" (OuterVolumeSpecName: "config") pod "1aa63814-587f-4839-bf7f-4bc1e02f8704" (UID: "1aa63814-587f-4839-bf7f-4bc1e02f8704"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.285626 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-combined-ca-bundle\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.285679 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-scripts\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.285704 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2lwp\" (UniqueName: \"kubernetes.io/projected/d17faf34-1a55-4544-8da0-2b15159ff1d6-kube-api-access-p2lwp\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.285736 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-config-data\") pod 
\"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.285792 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d17faf34-1a55-4544-8da0-2b15159ff1d6-logs\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.285866 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-internal-tls-certs\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.285935 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-public-tls-certs\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.286014 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.286041 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aa63814-587f-4839-bf7f-4bc1e02f8704-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.288514 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d17faf34-1a55-4544-8da0-2b15159ff1d6-logs\") 
pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.288689 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-scripts\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.288934 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-combined-ca-bundle\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.289606 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-config-data\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.289646 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-public-tls-certs\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.292295 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d17faf34-1a55-4544-8da0-2b15159ff1d6-internal-tls-certs\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" 
Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.302588 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2lwp\" (UniqueName: \"kubernetes.io/projected/d17faf34-1a55-4544-8da0-2b15159ff1d6-kube-api-access-p2lwp\") pod \"placement-594c4f7c44-lnbrv\" (UID: \"d17faf34-1a55-4544-8da0-2b15159ff1d6\") " pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.532319 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.757692 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.757691 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b946d459c-xfbnm" event={"ID":"1aa63814-587f-4839-bf7f-4bc1e02f8704","Type":"ContainerDied","Data":"c0938dd664e9b2a14382df9efa4a75c72b5eb7164c902d6af1906dd3b9fc6533"} Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.757828 4996 scope.go:117] "RemoveContainer" containerID="ce842fd81361d42e7c8b9c73cfb896c359f0504e5bfbe215601faf167ab93b2d" Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.778885 4996 generic.go:334] "Generic (PLEG): container finished" podID="49585240-7b27-458c-8d70-d23d8326bb94" containerID="b7a95f3fc3578ddf33e205b07480ce2fe00ba8f0d6f4c709dce74c1bc923a0be" exitCode=0 Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.778969 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67b5c7c7f7-mzzc4" event={"ID":"49585240-7b27-458c-8d70-d23d8326bb94","Type":"ContainerDied","Data":"b7a95f3fc3578ddf33e205b07480ce2fe00ba8f0d6f4c709dce74c1bc923a0be"} Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.801123 4996 scope.go:117] "RemoveContainer" containerID="c17d816056f65a478e6a81374dbd561ff7bccc80162d2050e9f031a6b076126c" 
Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.804422 4996 generic.go:334] "Generic (PLEG): container finished" podID="66d086f8-902e-46c4-ba7f-bb6549f618ac" containerID="a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405" exitCode=0 Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.804784 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66d086f8-902e-46c4-ba7f-bb6549f618ac","Type":"ContainerDied","Data":"a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405"} Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.827981 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-xfbnm"] Feb 28 09:21:40 crc kubenswrapper[4996]: I0228 09:21:40.851737 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b946d459c-xfbnm"] Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.048387 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aa63814-587f-4839-bf7f-4bc1e02f8704" path="/var/lib/kubelet/pods/1aa63814-587f-4839-bf7f-4bc1e02f8704/volumes" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.170578 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.276675 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-594c4f7c44-lnbrv"] Feb 28 09:21:41 crc kubenswrapper[4996]: W0228 09:21:41.286259 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd17faf34_1a55_4544_8da0_2b15159ff1d6.slice/crio-0884e5362eaf8c7d0d68511ee2a619f1784061149394e9a6585ee6cb9c4bdc74 WatchSource:0}: Error finding container 0884e5362eaf8c7d0d68511ee2a619f1784061149394e9a6585ee6cb9c4bdc74: Status 404 returned error can't find the container with id 0884e5362eaf8c7d0d68511ee2a619f1784061149394e9a6585ee6cb9c4bdc74 Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.322823 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-public-tls-certs\") pod \"49585240-7b27-458c-8d70-d23d8326bb94\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.322901 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-combined-ca-bundle\") pod \"49585240-7b27-458c-8d70-d23d8326bb94\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.323340 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6cbp\" (UniqueName: \"kubernetes.io/projected/49585240-7b27-458c-8d70-d23d8326bb94-kube-api-access-n6cbp\") pod \"49585240-7b27-458c-8d70-d23d8326bb94\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.323409 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-config\") pod \"49585240-7b27-458c-8d70-d23d8326bb94\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.323465 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-internal-tls-certs\") pod \"49585240-7b27-458c-8d70-d23d8326bb94\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.323564 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-ovndb-tls-certs\") pod \"49585240-7b27-458c-8d70-d23d8326bb94\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.323703 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-httpd-config\") pod \"49585240-7b27-458c-8d70-d23d8326bb94\" (UID: \"49585240-7b27-458c-8d70-d23d8326bb94\") " Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.333553 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49585240-7b27-458c-8d70-d23d8326bb94-kube-api-access-n6cbp" (OuterVolumeSpecName: "kube-api-access-n6cbp") pod "49585240-7b27-458c-8d70-d23d8326bb94" (UID: "49585240-7b27-458c-8d70-d23d8326bb94"). InnerVolumeSpecName "kube-api-access-n6cbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.337301 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "49585240-7b27-458c-8d70-d23d8326bb94" (UID: "49585240-7b27-458c-8d70-d23d8326bb94"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.380977 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "49585240-7b27-458c-8d70-d23d8326bb94" (UID: "49585240-7b27-458c-8d70-d23d8326bb94"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.401609 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49585240-7b27-458c-8d70-d23d8326bb94" (UID: "49585240-7b27-458c-8d70-d23d8326bb94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.402757 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "49585240-7b27-458c-8d70-d23d8326bb94" (UID: "49585240-7b27-458c-8d70-d23d8326bb94"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.405577 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-config" (OuterVolumeSpecName: "config") pod "49585240-7b27-458c-8d70-d23d8326bb94" (UID: "49585240-7b27-458c-8d70-d23d8326bb94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.426665 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.426705 4996 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.426716 4996 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.426724 4996 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.426733 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.426742 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6cbp\" (UniqueName: 
\"kubernetes.io/projected/49585240-7b27-458c-8d70-d23d8326bb94-kube-api-access-n6cbp\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.444104 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "49585240-7b27-458c-8d70-d23d8326bb94" (UID: "49585240-7b27-458c-8d70-d23d8326bb94"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.528218 4996 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/49585240-7b27-458c-8d70-d23d8326bb94-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.826293 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-594c4f7c44-lnbrv" event={"ID":"d17faf34-1a55-4544-8da0-2b15159ff1d6","Type":"ContainerStarted","Data":"d1cf3ff13222e74f1ae8ca2264f435a8c6e214ab0f075bd32d093bb57b64d4d3"} Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.826662 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.826678 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-594c4f7c44-lnbrv" event={"ID":"d17faf34-1a55-4544-8da0-2b15159ff1d6","Type":"ContainerStarted","Data":"9ca207664d35181792d79fc05b1e5671b82424e1d0275e8df739d3c9fe6dcf0e"} Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.826690 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-594c4f7c44-lnbrv" event={"ID":"d17faf34-1a55-4544-8da0-2b15159ff1d6","Type":"ContainerStarted","Data":"0884e5362eaf8c7d0d68511ee2a619f1784061149394e9a6585ee6cb9c4bdc74"} Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 
09:21:41.838112 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67b5c7c7f7-mzzc4" event={"ID":"49585240-7b27-458c-8d70-d23d8326bb94","Type":"ContainerDied","Data":"3206d9dd40f49a9f7a6438140869c2a4eb24cea75548ba0290fbc9b29b31b88f"} Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.838168 4996 scope.go:117] "RemoveContainer" containerID="541161af47e15e689a7869b89d735ee7e1ec2e404f5bdd2a6c84cafb7ae9f3bf" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.838212 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67b5c7c7f7-mzzc4" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.851981 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-594c4f7c44-lnbrv" podStartSLOduration=1.851970171 podStartE2EDuration="1.851970171s" podCreationTimestamp="2026-02-28 09:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:41.848073516 +0000 UTC m=+1265.538876327" watchObservedRunningTime="2026-02-28 09:21:41.851970171 +0000 UTC m=+1265.542772982" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.877948 4996 scope.go:117] "RemoveContainer" containerID="b7a95f3fc3578ddf33e205b07480ce2fe00ba8f0d6f4c709dce74c1bc923a0be" Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.883122 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67b5c7c7f7-mzzc4"] Feb 28 09:21:41 crc kubenswrapper[4996]: I0228 09:21:41.890654 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67b5c7c7f7-mzzc4"] Feb 28 09:21:42 crc kubenswrapper[4996]: I0228 09:21:42.851591 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:21:43 crc kubenswrapper[4996]: I0228 09:21:43.046862 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="49585240-7b27-458c-8d70-d23d8326bb94" path="/var/lib/kubelet/pods/49585240-7b27-458c-8d70-d23d8326bb94/volumes" Feb 28 09:21:43 crc kubenswrapper[4996]: I0228 09:21:43.780849 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55cc9dfcd4-m6gv8" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 28 09:21:43 crc kubenswrapper[4996]: I0228 09:21:43.864369 4996 generic.go:334] "Generic (PLEG): container finished" podID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerID="439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284" exitCode=0 Feb 28 09:21:43 crc kubenswrapper[4996]: I0228 09:21:43.865286 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cc9dfcd4-m6gv8" event={"ID":"9dcd95e8-c193-47ef-bc21-acabccfcff53","Type":"ContainerDied","Data":"439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284"} Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.716681 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.788401 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-combined-ca-bundle\") pod \"66d086f8-902e-46c4-ba7f-bb6549f618ac\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.788472 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvsdd\" (UniqueName: \"kubernetes.io/projected/66d086f8-902e-46c4-ba7f-bb6549f618ac-kube-api-access-tvsdd\") pod \"66d086f8-902e-46c4-ba7f-bb6549f618ac\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.788502 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-scripts\") pod \"66d086f8-902e-46c4-ba7f-bb6549f618ac\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.788610 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data-custom\") pod \"66d086f8-902e-46c4-ba7f-bb6549f618ac\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.788656 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66d086f8-902e-46c4-ba7f-bb6549f618ac-etc-machine-id\") pod \"66d086f8-902e-46c4-ba7f-bb6549f618ac\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.788688 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data\") pod \"66d086f8-902e-46c4-ba7f-bb6549f618ac\" (UID: \"66d086f8-902e-46c4-ba7f-bb6549f618ac\") " Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.793373 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66d086f8-902e-46c4-ba7f-bb6549f618ac-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "66d086f8-902e-46c4-ba7f-bb6549f618ac" (UID: "66d086f8-902e-46c4-ba7f-bb6549f618ac"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.800576 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66d086f8-902e-46c4-ba7f-bb6549f618ac-kube-api-access-tvsdd" (OuterVolumeSpecName: "kube-api-access-tvsdd") pod "66d086f8-902e-46c4-ba7f-bb6549f618ac" (UID: "66d086f8-902e-46c4-ba7f-bb6549f618ac"). InnerVolumeSpecName "kube-api-access-tvsdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.810383 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "66d086f8-902e-46c4-ba7f-bb6549f618ac" (UID: "66d086f8-902e-46c4-ba7f-bb6549f618ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.810421 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-scripts" (OuterVolumeSpecName: "scripts") pod "66d086f8-902e-46c4-ba7f-bb6549f618ac" (UID: "66d086f8-902e-46c4-ba7f-bb6549f618ac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.875634 4996 generic.go:334] "Generic (PLEG): container finished" podID="66d086f8-902e-46c4-ba7f-bb6549f618ac" containerID="a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f" exitCode=0 Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.875677 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66d086f8-902e-46c4-ba7f-bb6549f618ac","Type":"ContainerDied","Data":"a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f"} Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.875703 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"66d086f8-902e-46c4-ba7f-bb6549f618ac","Type":"ContainerDied","Data":"86d826ac5bc66d4e09e764bd779c0efdb0232855e37a208aa540782c3b13342e"} Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.875718 4996 scope.go:117] "RemoveContainer" containerID="a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.875807 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.889234 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66d086f8-902e-46c4-ba7f-bb6549f618ac" (UID: "66d086f8-902e-46c4-ba7f-bb6549f618ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.895352 4996 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.895524 4996 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/66d086f8-902e-46c4-ba7f-bb6549f618ac-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.895636 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.895715 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvsdd\" (UniqueName: \"kubernetes.io/projected/66d086f8-902e-46c4-ba7f-bb6549f618ac-kube-api-access-tvsdd\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.895813 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.902839 4996 scope.go:117] "RemoveContainer" containerID="a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.906026 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data" (OuterVolumeSpecName: "config-data") pod "66d086f8-902e-46c4-ba7f-bb6549f618ac" (UID: "66d086f8-902e-46c4-ba7f-bb6549f618ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.935282 4996 scope.go:117] "RemoveContainer" containerID="a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405" Feb 28 09:21:44 crc kubenswrapper[4996]: E0228 09:21:44.935656 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405\": container with ID starting with a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405 not found: ID does not exist" containerID="a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.935679 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405"} err="failed to get container status \"a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405\": rpc error: code = NotFound desc = could not find container \"a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405\": container with ID starting with a383c2552eeed78d23456fa444ab045061a52621a35184c997e28cadc7f2e405 not found: ID does not exist" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.935696 4996 scope.go:117] "RemoveContainer" containerID="a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f" Feb 28 09:21:44 crc kubenswrapper[4996]: E0228 09:21:44.935907 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f\": container with ID starting with a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f not found: ID does not exist" containerID="a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.935927 
4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f"} err="failed to get container status \"a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f\": rpc error: code = NotFound desc = could not find container \"a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f\": container with ID starting with a65622e56261f2459be822b202bf603068e15e5ef7109157c9b1afb365398f5f not found: ID does not exist" Feb 28 09:21:44 crc kubenswrapper[4996]: I0228 09:21:44.997937 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d086f8-902e-46c4-ba7f-bb6549f618ac-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.197264 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.203916 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.217771 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 09:21:45 crc kubenswrapper[4996]: E0228 09:21:45.218766 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49585240-7b27-458c-8d70-d23d8326bb94" containerName="neutron-httpd" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.218794 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="49585240-7b27-458c-8d70-d23d8326bb94" containerName="neutron-httpd" Feb 28 09:21:45 crc kubenswrapper[4996]: E0228 09:21:45.218824 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d086f8-902e-46c4-ba7f-bb6549f618ac" containerName="probe" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.218832 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d086f8-902e-46c4-ba7f-bb6549f618ac" 
containerName="probe" Feb 28 09:21:45 crc kubenswrapper[4996]: E0228 09:21:45.218864 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49585240-7b27-458c-8d70-d23d8326bb94" containerName="neutron-api" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.218874 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="49585240-7b27-458c-8d70-d23d8326bb94" containerName="neutron-api" Feb 28 09:21:45 crc kubenswrapper[4996]: E0228 09:21:45.218893 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66d086f8-902e-46c4-ba7f-bb6549f618ac" containerName="cinder-scheduler" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.218902 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="66d086f8-902e-46c4-ba7f-bb6549f618ac" containerName="cinder-scheduler" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.219273 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d086f8-902e-46c4-ba7f-bb6549f618ac" containerName="cinder-scheduler" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.219302 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="49585240-7b27-458c-8d70-d23d8326bb94" containerName="neutron-api" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.219315 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="49585240-7b27-458c-8d70-d23d8326bb94" containerName="neutron-httpd" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.219344 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="66d086f8-902e-46c4-ba7f-bb6549f618ac" containerName="probe" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.240796 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.262212 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.262568 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.301760 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.301840 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-config-data\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.301868 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.301889 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjqc\" (UniqueName: \"kubernetes.io/projected/a57ea6ee-2619-4875-96e6-60622a9754d3-kube-api-access-4bjqc\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: 
I0228 09:21:45.301916 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a57ea6ee-2619-4875-96e6-60622a9754d3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.301948 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-scripts\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.403261 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.403331 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-config-data\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.403356 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.403379 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bjqc\" 
(UniqueName: \"kubernetes.io/projected/a57ea6ee-2619-4875-96e6-60622a9754d3-kube-api-access-4bjqc\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.403403 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a57ea6ee-2619-4875-96e6-60622a9754d3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.403434 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-scripts\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.403979 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a57ea6ee-2619-4875-96e6-60622a9754d3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.407802 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.410133 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-config-data\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " 
pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.411413 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-scripts\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.417988 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a57ea6ee-2619-4875-96e6-60622a9754d3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.423818 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bjqc\" (UniqueName: \"kubernetes.io/projected/a57ea6ee-2619-4875-96e6-60622a9754d3-kube-api-access-4bjqc\") pod \"cinder-scheduler-0\" (UID: \"a57ea6ee-2619-4875-96e6-60622a9754d3\") " pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.460180 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-644f7b559b-gngw5" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.581880 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 09:21:45 crc kubenswrapper[4996]: I0228 09:21:45.924446 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:46 crc kubenswrapper[4996]: I0228 09:21:46.021401 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58dc667d-krgck" Feb 28 09:21:46 crc kubenswrapper[4996]: I0228 09:21:46.102149 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d4548f44b-9pcrx"] Feb 28 09:21:46 crc kubenswrapper[4996]: I0228 09:21:46.102572 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d4548f44b-9pcrx" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api-log" containerID="cri-o://afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790" gracePeriod=30 Feb 28 09:21:46 crc kubenswrapper[4996]: I0228 09:21:46.102919 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5d4548f44b-9pcrx" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api" containerID="cri-o://f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf" gracePeriod=30 Feb 28 09:21:46 crc kubenswrapper[4996]: I0228 09:21:46.156656 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 09:21:46 crc kubenswrapper[4996]: I0228 09:21:46.916323 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a57ea6ee-2619-4875-96e6-60622a9754d3","Type":"ContainerStarted","Data":"a2653ef866b52c1f1ba414bf515de0ac4a481476f9e40667cba3b7f965685499"} Feb 28 09:21:46 crc kubenswrapper[4996]: I0228 09:21:46.916576 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"a57ea6ee-2619-4875-96e6-60622a9754d3","Type":"ContainerStarted","Data":"443de72e8e9cb77d4cee5654d4cd4f18c34a75691e78dcd71568f8224a434bde"} Feb 28 09:21:46 crc kubenswrapper[4996]: I0228 09:21:46.921230 4996 generic.go:334] "Generic (PLEG): container finished" podID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerID="afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790" exitCode=143 Feb 28 09:21:46 crc kubenswrapper[4996]: I0228 09:21:46.921320 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4548f44b-9pcrx" event={"ID":"47bd376f-eac1-4b4d-b765-24ea791625ae","Type":"ContainerDied","Data":"afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790"} Feb 28 09:21:47 crc kubenswrapper[4996]: I0228 09:21:47.043788 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66d086f8-902e-46c4-ba7f-bb6549f618ac" path="/var/lib/kubelet/pods/66d086f8-902e-46c4-ba7f-bb6549f618ac/volumes" Feb 28 09:21:47 crc kubenswrapper[4996]: I0228 09:21:47.931368 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a57ea6ee-2619-4875-96e6-60622a9754d3","Type":"ContainerStarted","Data":"f4047e643d3dbf9e25b55aa38534f3f9846fbe9304b5e38e647b9b061f524e44"} Feb 28 09:21:47 crc kubenswrapper[4996]: I0228 09:21:47.970122 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.970094532 podStartE2EDuration="2.970094532s" podCreationTimestamp="2026-02-28 09:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:21:47.946793377 +0000 UTC m=+1271.637596198" watchObservedRunningTime="2026-02-28 09:21:47.970094532 +0000 UTC m=+1271.660897353" Feb 28 09:21:48 crc kubenswrapper[4996]: I0228 09:21:48.327671 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/cinder-api-0" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.266160 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d4548f44b-9pcrx" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:34230->10.217.0.160:9311: read: connection reset by peer" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.266213 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5d4548f44b-9pcrx" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.160:9311/healthcheck\": read tcp 10.217.0.2:34220->10.217.0.160:9311: read: connection reset by peer" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.720949 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.804879 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-combined-ca-bundle\") pod \"47bd376f-eac1-4b4d-b765-24ea791625ae\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.804980 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbrnr\" (UniqueName: \"kubernetes.io/projected/47bd376f-eac1-4b4d-b765-24ea791625ae-kube-api-access-dbrnr\") pod \"47bd376f-eac1-4b4d-b765-24ea791625ae\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.805084 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data\") pod 
\"47bd376f-eac1-4b4d-b765-24ea791625ae\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.805112 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data-custom\") pod \"47bd376f-eac1-4b4d-b765-24ea791625ae\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.805193 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47bd376f-eac1-4b4d-b765-24ea791625ae-logs\") pod \"47bd376f-eac1-4b4d-b765-24ea791625ae\" (UID: \"47bd376f-eac1-4b4d-b765-24ea791625ae\") " Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.806135 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47bd376f-eac1-4b4d-b765-24ea791625ae-logs" (OuterVolumeSpecName: "logs") pod "47bd376f-eac1-4b4d-b765-24ea791625ae" (UID: "47bd376f-eac1-4b4d-b765-24ea791625ae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.810134 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "47bd376f-eac1-4b4d-b765-24ea791625ae" (UID: "47bd376f-eac1-4b4d-b765-24ea791625ae"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.813076 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47bd376f-eac1-4b4d-b765-24ea791625ae-kube-api-access-dbrnr" (OuterVolumeSpecName: "kube-api-access-dbrnr") pod "47bd376f-eac1-4b4d-b765-24ea791625ae" (UID: "47bd376f-eac1-4b4d-b765-24ea791625ae"). InnerVolumeSpecName "kube-api-access-dbrnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.844167 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47bd376f-eac1-4b4d-b765-24ea791625ae" (UID: "47bd376f-eac1-4b4d-b765-24ea791625ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.860128 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data" (OuterVolumeSpecName: "config-data") pod "47bd376f-eac1-4b4d-b765-24ea791625ae" (UID: "47bd376f-eac1-4b4d-b765-24ea791625ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.907711 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/47bd376f-eac1-4b4d-b765-24ea791625ae-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.907756 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.907770 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbrnr\" (UniqueName: \"kubernetes.io/projected/47bd376f-eac1-4b4d-b765-24ea791625ae-kube-api-access-dbrnr\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.907782 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.907795 4996 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/47bd376f-eac1-4b4d-b765-24ea791625ae-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.949861 4996 generic.go:334] "Generic (PLEG): container finished" podID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerID="f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf" exitCode=0 Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.949923 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5d4548f44b-9pcrx" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.949940 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4548f44b-9pcrx" event={"ID":"47bd376f-eac1-4b4d-b765-24ea791625ae","Type":"ContainerDied","Data":"f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf"} Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.950314 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5d4548f44b-9pcrx" event={"ID":"47bd376f-eac1-4b4d-b765-24ea791625ae","Type":"ContainerDied","Data":"87e609050aa58f90b844fc1c73023b718c5b0a35359d2ca2d82924956faa49a6"} Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.950333 4996 scope.go:117] "RemoveContainer" containerID="f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf" Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.994047 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5d4548f44b-9pcrx"] Feb 28 09:21:49 crc kubenswrapper[4996]: I0228 09:21:49.997128 4996 scope.go:117] "RemoveContainer" containerID="afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.013381 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5d4548f44b-9pcrx"] Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.027908 4996 scope.go:117] "RemoveContainer" containerID="f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf" Feb 28 09:21:50 crc kubenswrapper[4996]: E0228 09:21:50.029507 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf\": container with ID starting with f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf not found: ID does not exist" 
containerID="f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.029562 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf"} err="failed to get container status \"f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf\": rpc error: code = NotFound desc = could not find container \"f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf\": container with ID starting with f183aca41518589db35ce27a3ed3c61ab2e7fede352fac455b024a5c665c0caf not found: ID does not exist" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.029594 4996 scope.go:117] "RemoveContainer" containerID="afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790" Feb 28 09:21:50 crc kubenswrapper[4996]: E0228 09:21:50.029990 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790\": container with ID starting with afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790 not found: ID does not exist" containerID="afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.030035 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790"} err="failed to get container status \"afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790\": rpc error: code = NotFound desc = could not find container \"afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790\": container with ID starting with afbf114971a4e5954f1717fe403b2048dcc28630974351257c2bb54339292790 not found: ID does not exist" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.118178 4996 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 28 09:21:50 crc kubenswrapper[4996]: E0228 09:21:50.118590 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.118611 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api" Feb 28 09:21:50 crc kubenswrapper[4996]: E0228 09:21:50.118656 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api-log" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.118667 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api-log" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.118862 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.118906 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" containerName="barbican-api-log" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.119617 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.122735 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.124266 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4cqw7" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.131310 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.132151 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.213596 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ea848d22-46ca-46ec-a5e7-5b26014b569b-openstack-config-secret\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.213872 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvwsh\" (UniqueName: \"kubernetes.io/projected/ea848d22-46ca-46ec-a5e7-5b26014b569b-kube-api-access-bvwsh\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.214516 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea848d22-46ca-46ec-a5e7-5b26014b569b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.214642 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ea848d22-46ca-46ec-a5e7-5b26014b569b-openstack-config\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.316889 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvwsh\" (UniqueName: \"kubernetes.io/projected/ea848d22-46ca-46ec-a5e7-5b26014b569b-kube-api-access-bvwsh\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.317208 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea848d22-46ca-46ec-a5e7-5b26014b569b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.317323 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ea848d22-46ca-46ec-a5e7-5b26014b569b-openstack-config\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.317474 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ea848d22-46ca-46ec-a5e7-5b26014b569b-openstack-config-secret\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.318225 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/ea848d22-46ca-46ec-a5e7-5b26014b569b-openstack-config\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.321664 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ea848d22-46ca-46ec-a5e7-5b26014b569b-openstack-config-secret\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.321776 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea848d22-46ca-46ec-a5e7-5b26014b569b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.340559 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvwsh\" (UniqueName: \"kubernetes.io/projected/ea848d22-46ca-46ec-a5e7-5b26014b569b-kube-api-access-bvwsh\") pod \"openstackclient\" (UID: \"ea848d22-46ca-46ec-a5e7-5b26014b569b\") " pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.434522 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.582460 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.908265 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 09:21:50 crc kubenswrapper[4996]: W0228 09:21:50.925734 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea848d22_46ca_46ec_a5e7_5b26014b569b.slice/crio-cfc5be35ea5304df2fb6e44bcd86a347bf96d8e3814cac101205ad34084b102c WatchSource:0}: Error finding container cfc5be35ea5304df2fb6e44bcd86a347bf96d8e3814cac101205ad34084b102c: Status 404 returned error can't find the container with id cfc5be35ea5304df2fb6e44bcd86a347bf96d8e3814cac101205ad34084b102c Feb 28 09:21:50 crc kubenswrapper[4996]: I0228 09:21:50.958654 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ea848d22-46ca-46ec-a5e7-5b26014b569b","Type":"ContainerStarted","Data":"cfc5be35ea5304df2fb6e44bcd86a347bf96d8e3814cac101205ad34084b102c"} Feb 28 09:21:51 crc kubenswrapper[4996]: I0228 09:21:51.046824 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47bd376f-eac1-4b4d-b765-24ea791625ae" path="/var/lib/kubelet/pods/47bd376f-eac1-4b4d-b765-24ea791625ae/volumes" Feb 28 09:21:53 crc kubenswrapper[4996]: I0228 09:21:53.781376 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55cc9dfcd4-m6gv8" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 28 09:21:55 crc kubenswrapper[4996]: I0228 09:21:55.624858 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Feb 28 09:21:55 crc kubenswrapper[4996]: I0228 09:21:55.822292 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 28 09:21:58 crc kubenswrapper[4996]: I0228 09:21:58.506159 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:21:58 crc kubenswrapper[4996]: I0228 09:21:58.506776 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="ceilometer-central-agent" containerID="cri-o://73c8e71acfa6f183593e97ea7caafc7f6a9df02c024990653e208621b241da19" gracePeriod=30 Feb 28 09:21:58 crc kubenswrapper[4996]: I0228 09:21:58.506843 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="proxy-httpd" containerID="cri-o://b720c2e1fcdafeecae7d910021262c3d1750fef406a76cac4d0493f1ce2d67c9" gracePeriod=30 Feb 28 09:21:58 crc kubenswrapper[4996]: I0228 09:21:58.506883 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="ceilometer-notification-agent" containerID="cri-o://060f4ca35f7c41a8299b539abbc132581de1b121486fbd1bf8491538f5b2a3b7" gracePeriod=30 Feb 28 09:21:58 crc kubenswrapper[4996]: I0228 09:21:58.506897 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="sg-core" containerID="cri-o://9c475fe616c7c23a6828208b2ea6a8403a9427ba587656702733dc500feac909" gracePeriod=30 Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.039895 4996 generic.go:334] "Generic (PLEG): container finished" podID="add6b806-1241-4e02-9959-0dd147b74abe" 
containerID="b720c2e1fcdafeecae7d910021262c3d1750fef406a76cac4d0493f1ce2d67c9" exitCode=0 Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.039931 4996 generic.go:334] "Generic (PLEG): container finished" podID="add6b806-1241-4e02-9959-0dd147b74abe" containerID="9c475fe616c7c23a6828208b2ea6a8403a9427ba587656702733dc500feac909" exitCode=2 Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.039944 4996 generic.go:334] "Generic (PLEG): container finished" podID="add6b806-1241-4e02-9959-0dd147b74abe" containerID="060f4ca35f7c41a8299b539abbc132581de1b121486fbd1bf8491538f5b2a3b7" exitCode=0 Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.039954 4996 generic.go:334] "Generic (PLEG): container finished" podID="add6b806-1241-4e02-9959-0dd147b74abe" containerID="73c8e71acfa6f183593e97ea7caafc7f6a9df02c024990653e208621b241da19" exitCode=0 Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.042088 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerDied","Data":"b720c2e1fcdafeecae7d910021262c3d1750fef406a76cac4d0493f1ce2d67c9"} Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.042124 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerDied","Data":"9c475fe616c7c23a6828208b2ea6a8403a9427ba587656702733dc500feac909"} Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.042137 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerDied","Data":"060f4ca35f7c41a8299b539abbc132581de1b121486fbd1bf8491538f5b2a3b7"} Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.042148 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerDied","Data":"73c8e71acfa6f183593e97ea7caafc7f6a9df02c024990653e208621b241da19"} Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.677752 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.828248 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn7ht\" (UniqueName: \"kubernetes.io/projected/add6b806-1241-4e02-9959-0dd147b74abe-kube-api-access-hn7ht\") pod \"add6b806-1241-4e02-9959-0dd147b74abe\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.828324 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-combined-ca-bundle\") pod \"add6b806-1241-4e02-9959-0dd147b74abe\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.828358 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-config-data\") pod \"add6b806-1241-4e02-9959-0dd147b74abe\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.828469 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-run-httpd\") pod \"add6b806-1241-4e02-9959-0dd147b74abe\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.828495 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-sg-core-conf-yaml\") pod \"add6b806-1241-4e02-9959-0dd147b74abe\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.828530 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-log-httpd\") pod \"add6b806-1241-4e02-9959-0dd147b74abe\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.828551 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-scripts\") pod \"add6b806-1241-4e02-9959-0dd147b74abe\" (UID: \"add6b806-1241-4e02-9959-0dd147b74abe\") " Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.829289 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "add6b806-1241-4e02-9959-0dd147b74abe" (UID: "add6b806-1241-4e02-9959-0dd147b74abe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.829349 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "add6b806-1241-4e02-9959-0dd147b74abe" (UID: "add6b806-1241-4e02-9959-0dd147b74abe"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.834197 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/add6b806-1241-4e02-9959-0dd147b74abe-kube-api-access-hn7ht" (OuterVolumeSpecName: "kube-api-access-hn7ht") pod "add6b806-1241-4e02-9959-0dd147b74abe" (UID: "add6b806-1241-4e02-9959-0dd147b74abe"). InnerVolumeSpecName "kube-api-access-hn7ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.835342 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-scripts" (OuterVolumeSpecName: "scripts") pod "add6b806-1241-4e02-9959-0dd147b74abe" (UID: "add6b806-1241-4e02-9959-0dd147b74abe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.868613 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "add6b806-1241-4e02-9959-0dd147b74abe" (UID: "add6b806-1241-4e02-9959-0dd147b74abe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.911447 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "add6b806-1241-4e02-9959-0dd147b74abe" (UID: "add6b806-1241-4e02-9959-0dd147b74abe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.929750 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-config-data" (OuterVolumeSpecName: "config-data") pod "add6b806-1241-4e02-9959-0dd147b74abe" (UID: "add6b806-1241-4e02-9959-0dd147b74abe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.930847 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.930893 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.930913 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/add6b806-1241-4e02-9959-0dd147b74abe-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.930931 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.930950 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn7ht\" (UniqueName: \"kubernetes.io/projected/add6b806-1241-4e02-9959-0dd147b74abe-kube-api-access-hn7ht\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.930968 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:21:59 crc kubenswrapper[4996]: I0228 09:21:59.930985 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/add6b806-1241-4e02-9959-0dd147b74abe-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.048888 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ea848d22-46ca-46ec-a5e7-5b26014b569b","Type":"ContainerStarted","Data":"3f5d8cc28085b040aa38d010bd7d138e1957d83c36d2f441955b2a96ef989a7f"} Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.054458 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"add6b806-1241-4e02-9959-0dd147b74abe","Type":"ContainerDied","Data":"e48c4c1073e9c15213ec9134be1c22ea331918604c82ed247a83877e4f13c0dc"} Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.054516 4996 scope.go:117] "RemoveContainer" containerID="b720c2e1fcdafeecae7d910021262c3d1750fef406a76cac4d0493f1ce2d67c9" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.054617 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.072211 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.607611444 podStartE2EDuration="10.072188213s" podCreationTimestamp="2026-02-28 09:21:50 +0000 UTC" firstStartedPulling="2026-02-28 09:21:50.927334941 +0000 UTC m=+1274.618137752" lastFinishedPulling="2026-02-28 09:21:59.39191171 +0000 UTC m=+1283.082714521" observedRunningTime="2026-02-28 09:22:00.071189159 +0000 UTC m=+1283.761991960" watchObservedRunningTime="2026-02-28 09:22:00.072188213 +0000 UTC m=+1283.762991044" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.092740 4996 scope.go:117] "RemoveContainer" containerID="9c475fe616c7c23a6828208b2ea6a8403a9427ba587656702733dc500feac909" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.108870 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.129974 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.130239 4996 scope.go:117] "RemoveContainer" containerID="060f4ca35f7c41a8299b539abbc132581de1b121486fbd1bf8491538f5b2a3b7" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.139412 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:00 crc kubenswrapper[4996]: E0228 09:22:00.139870 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="proxy-httpd" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.139894 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="proxy-httpd" Feb 28 09:22:00 crc kubenswrapper[4996]: E0228 09:22:00.139912 4996 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="ceilometer-central-agent" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.139920 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="ceilometer-central-agent" Feb 28 09:22:00 crc kubenswrapper[4996]: E0228 09:22:00.139951 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="ceilometer-notification-agent" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.139959 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="ceilometer-notification-agent" Feb 28 09:22:00 crc kubenswrapper[4996]: E0228 09:22:00.139972 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="sg-core" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.139979 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="sg-core" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.140221 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="sg-core" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.140261 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="ceilometer-notification-agent" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.140284 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="ceilometer-central-agent" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.140296 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="add6b806-1241-4e02-9959-0dd147b74abe" containerName="proxy-httpd" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.142232 4996 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.147868 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537842-lvg4x"] Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.149052 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537842-lvg4x" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.157479 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537842-lvg4x"] Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.157729 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.157785 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.158084 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.158858 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.158985 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.167153 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.174338 4996 scope.go:117] "RemoveContainer" containerID="73c8e71acfa6f183593e97ea7caafc7f6a9df02c024990653e208621b241da19" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.239269 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-config-data\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.239323 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.239376 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-log-httpd\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.239450 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmqf6\" (UniqueName: \"kubernetes.io/projected/86f3768c-1150-4715-b5ef-17cc1471697d-kube-api-access-mmqf6\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.239521 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.239543 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phvs9\" (UniqueName: \"kubernetes.io/projected/9b3aa079-adbe-4f89-a7cb-7cece7b04a9d-kube-api-access-phvs9\") pod 
\"auto-csr-approver-29537842-lvg4x\" (UID: \"9b3aa079-adbe-4f89-a7cb-7cece7b04a9d\") " pod="openshift-infra/auto-csr-approver-29537842-lvg4x" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.239580 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-scripts\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.239617 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-run-httpd\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.344143 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-config-data\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.344200 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.344250 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-log-httpd\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.344295 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmqf6\" (UniqueName: \"kubernetes.io/projected/86f3768c-1150-4715-b5ef-17cc1471697d-kube-api-access-mmqf6\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.344343 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.344366 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phvs9\" (UniqueName: \"kubernetes.io/projected/9b3aa079-adbe-4f89-a7cb-7cece7b04a9d-kube-api-access-phvs9\") pod \"auto-csr-approver-29537842-lvg4x\" (UID: \"9b3aa079-adbe-4f89-a7cb-7cece7b04a9d\") " pod="openshift-infra/auto-csr-approver-29537842-lvg4x" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.344408 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-scripts\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.344441 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-run-httpd\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.344962 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-run-httpd\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.345153 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-log-httpd\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.348646 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-scripts\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.349537 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.354123 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-config-data\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.355615 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.365657 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mmqf6\" (UniqueName: \"kubernetes.io/projected/86f3768c-1150-4715-b5ef-17cc1471697d-kube-api-access-mmqf6\") pod \"ceilometer-0\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.368312 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phvs9\" (UniqueName: \"kubernetes.io/projected/9b3aa079-adbe-4f89-a7cb-7cece7b04a9d-kube-api-access-phvs9\") pod \"auto-csr-approver-29537842-lvg4x\" (UID: \"9b3aa079-adbe-4f89-a7cb-7cece7b04a9d\") " pod="openshift-infra/auto-csr-approver-29537842-lvg4x" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.477748 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.486872 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537842-lvg4x" Feb 28 09:22:00 crc kubenswrapper[4996]: I0228 09:22:00.545208 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:01 crc kubenswrapper[4996]: I0228 09:22:01.003618 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:01 crc kubenswrapper[4996]: W0228 09:22:01.006531 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f3768c_1150_4715_b5ef_17cc1471697d.slice/crio-862b3c75dc19d8b6f86bafbf76431e8e52ce7646e26502dfb1a1c2f9ef446594 WatchSource:0}: Error finding container 862b3c75dc19d8b6f86bafbf76431e8e52ce7646e26502dfb1a1c2f9ef446594: Status 404 returned error can't find the container with id 862b3c75dc19d8b6f86bafbf76431e8e52ce7646e26502dfb1a1c2f9ef446594 Feb 28 09:22:01 crc kubenswrapper[4996]: I0228 09:22:01.042857 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="add6b806-1241-4e02-9959-0dd147b74abe" path="/var/lib/kubelet/pods/add6b806-1241-4e02-9959-0dd147b74abe/volumes" Feb 28 09:22:01 crc kubenswrapper[4996]: W0228 09:22:01.070510 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b3aa079_adbe_4f89_a7cb_7cece7b04a9d.slice/crio-b7b81beb8871bd7b8535d527510205a0b4e57f96c1aa727218eaf090fcd66941 WatchSource:0}: Error finding container b7b81beb8871bd7b8535d527510205a0b4e57f96c1aa727218eaf090fcd66941: Status 404 returned error can't find the container with id b7b81beb8871bd7b8535d527510205a0b4e57f96c1aa727218eaf090fcd66941 Feb 28 09:22:01 crc kubenswrapper[4996]: I0228 09:22:01.071090 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537842-lvg4x"] Feb 28 09:22:01 crc kubenswrapper[4996]: I0228 09:22:01.075056 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerStarted","Data":"862b3c75dc19d8b6f86bafbf76431e8e52ce7646e26502dfb1a1c2f9ef446594"} Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.065169 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-pm9cp"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.066453 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.073486 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pm9cp"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.088989 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerStarted","Data":"cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce"} Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.090995 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537842-lvg4x" event={"ID":"9b3aa079-adbe-4f89-a7cb-7cece7b04a9d","Type":"ContainerStarted","Data":"b7b81beb8871bd7b8535d527510205a0b4e57f96c1aa727218eaf090fcd66941"} Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.191757 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwhmg\" (UniqueName: \"kubernetes.io/projected/71586534-5889-4027-8688-9b5b3e3394ea-kube-api-access-jwhmg\") pod \"nova-api-db-create-pm9cp\" (UID: \"71586534-5889-4027-8688-9b5b3e3394ea\") " pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.191886 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71586534-5889-4027-8688-9b5b3e3394ea-operator-scripts\") pod \"nova-api-db-create-pm9cp\" (UID: \"71586534-5889-4027-8688-9b5b3e3394ea\") " pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.236240 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-bg572"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.237646 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.257730 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bg572"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.280450 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-815f-account-create-update-wg8b6"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.282927 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.285428 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.292157 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-815f-account-create-update-wg8b6"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.295997 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwhmg\" (UniqueName: \"kubernetes.io/projected/71586534-5889-4027-8688-9b5b3e3394ea-kube-api-access-jwhmg\") pod \"nova-api-db-create-pm9cp\" (UID: \"71586534-5889-4027-8688-9b5b3e3394ea\") " pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.297668 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71586534-5889-4027-8688-9b5b3e3394ea-operator-scripts\") pod \"nova-api-db-create-pm9cp\" (UID: \"71586534-5889-4027-8688-9b5b3e3394ea\") " pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.302475 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71586534-5889-4027-8688-9b5b3e3394ea-operator-scripts\") pod 
\"nova-api-db-create-pm9cp\" (UID: \"71586534-5889-4027-8688-9b5b3e3394ea\") " pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.317688 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwhmg\" (UniqueName: \"kubernetes.io/projected/71586534-5889-4027-8688-9b5b3e3394ea-kube-api-access-jwhmg\") pod \"nova-api-db-create-pm9cp\" (UID: \"71586534-5889-4027-8688-9b5b3e3394ea\") " pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.379318 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-j68mh"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.381123 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.382282 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.389635 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-j68mh"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.400434 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9363ee4e-971c-4a87-9c13-a349d02ac678-operator-scripts\") pod \"nova-api-815f-account-create-update-wg8b6\" (UID: \"9363ee4e-971c-4a87-9c13-a349d02ac678\") " pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.400474 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm258\" (UniqueName: \"kubernetes.io/projected/46352d7d-4e62-4a29-8814-a8e2e33ef813-kube-api-access-jm258\") pod \"nova-cell0-db-create-bg572\" (UID: 
\"46352d7d-4e62-4a29-8814-a8e2e33ef813\") " pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.400531 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmkrl\" (UniqueName: \"kubernetes.io/projected/9363ee4e-971c-4a87-9c13-a349d02ac678-kube-api-access-nmkrl\") pod \"nova-api-815f-account-create-update-wg8b6\" (UID: \"9363ee4e-971c-4a87-9c13-a349d02ac678\") " pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.400580 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46352d7d-4e62-4a29-8814-a8e2e33ef813-operator-scripts\") pod \"nova-cell0-db-create-bg572\" (UID: \"46352d7d-4e62-4a29-8814-a8e2e33ef813\") " pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.410907 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3a07-account-create-update-8d4gz"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.412517 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.416980 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.427854 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3a07-account-create-update-8d4gz"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.502499 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctg7t\" (UniqueName: \"kubernetes.io/projected/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-kube-api-access-ctg7t\") pod \"nova-cell1-db-create-j68mh\" (UID: \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\") " pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.504359 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmkrl\" (UniqueName: \"kubernetes.io/projected/9363ee4e-971c-4a87-9c13-a349d02ac678-kube-api-access-nmkrl\") pod \"nova-api-815f-account-create-update-wg8b6\" (UID: \"9363ee4e-971c-4a87-9c13-a349d02ac678\") " pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.504412 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-operator-scripts\") pod \"nova-cell1-db-create-j68mh\" (UID: \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\") " pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.504558 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46352d7d-4e62-4a29-8814-a8e2e33ef813-operator-scripts\") pod \"nova-cell0-db-create-bg572\" (UID: 
\"46352d7d-4e62-4a29-8814-a8e2e33ef813\") " pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.504778 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9363ee4e-971c-4a87-9c13-a349d02ac678-operator-scripts\") pod \"nova-api-815f-account-create-update-wg8b6\" (UID: \"9363ee4e-971c-4a87-9c13-a349d02ac678\") " pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.504804 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm258\" (UniqueName: \"kubernetes.io/projected/46352d7d-4e62-4a29-8814-a8e2e33ef813-kube-api-access-jm258\") pod \"nova-cell0-db-create-bg572\" (UID: \"46352d7d-4e62-4a29-8814-a8e2e33ef813\") " pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.507615 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46352d7d-4e62-4a29-8814-a8e2e33ef813-operator-scripts\") pod \"nova-cell0-db-create-bg572\" (UID: \"46352d7d-4e62-4a29-8814-a8e2e33ef813\") " pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.509270 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9363ee4e-971c-4a87-9c13-a349d02ac678-operator-scripts\") pod \"nova-api-815f-account-create-update-wg8b6\" (UID: \"9363ee4e-971c-4a87-9c13-a349d02ac678\") " pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.525915 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm258\" (UniqueName: \"kubernetes.io/projected/46352d7d-4e62-4a29-8814-a8e2e33ef813-kube-api-access-jm258\") pod \"nova-cell0-db-create-bg572\" 
(UID: \"46352d7d-4e62-4a29-8814-a8e2e33ef813\") " pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.528424 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmkrl\" (UniqueName: \"kubernetes.io/projected/9363ee4e-971c-4a87-9c13-a349d02ac678-kube-api-access-nmkrl\") pod \"nova-api-815f-account-create-update-wg8b6\" (UID: \"9363ee4e-971c-4a87-9c13-a349d02ac678\") " pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.573607 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.599773 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-9c05-account-create-update-wvb9n"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.600789 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.603493 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.606963 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctg7t\" (UniqueName: \"kubernetes.io/projected/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-kube-api-access-ctg7t\") pod \"nova-cell1-db-create-j68mh\" (UID: \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\") " pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.607050 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-operator-scripts\") pod \"nova-cell1-db-create-j68mh\" (UID: \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\") " 
pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.607119 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5cca9a-4b23-405e-afed-de5776d7a46e-operator-scripts\") pod \"nova-cell0-3a07-account-create-update-8d4gz\" (UID: \"0c5cca9a-4b23-405e-afed-de5776d7a46e\") " pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.607148 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ptjh\" (UniqueName: \"kubernetes.io/projected/0c5cca9a-4b23-405e-afed-de5776d7a46e-kube-api-access-7ptjh\") pod \"nova-cell0-3a07-account-create-update-8d4gz\" (UID: \"0c5cca9a-4b23-405e-afed-de5776d7a46e\") " pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.607997 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-operator-scripts\") pod \"nova-cell1-db-create-j68mh\" (UID: \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\") " pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.628872 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctg7t\" (UniqueName: \"kubernetes.io/projected/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-kube-api-access-ctg7t\") pod \"nova-cell1-db-create-j68mh\" (UID: \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\") " pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.631437 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9c05-account-create-update-wvb9n"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.692779 4996 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.706854 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.710476 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5cca9a-4b23-405e-afed-de5776d7a46e-operator-scripts\") pod \"nova-cell0-3a07-account-create-update-8d4gz\" (UID: \"0c5cca9a-4b23-405e-afed-de5776d7a46e\") " pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.710700 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ptjh\" (UniqueName: \"kubernetes.io/projected/0c5cca9a-4b23-405e-afed-de5776d7a46e-kube-api-access-7ptjh\") pod \"nova-cell0-3a07-account-create-update-8d4gz\" (UID: \"0c5cca9a-4b23-405e-afed-de5776d7a46e\") " pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.711149 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn267\" (UniqueName: \"kubernetes.io/projected/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-kube-api-access-zn267\") pod \"nova-cell1-9c05-account-create-update-wvb9n\" (UID: \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\") " pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.711259 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-operator-scripts\") pod \"nova-cell1-9c05-account-create-update-wvb9n\" (UID: \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\") " 
pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.714656 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5cca9a-4b23-405e-afed-de5776d7a46e-operator-scripts\") pod \"nova-cell0-3a07-account-create-update-8d4gz\" (UID: \"0c5cca9a-4b23-405e-afed-de5776d7a46e\") " pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.732574 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ptjh\" (UniqueName: \"kubernetes.io/projected/0c5cca9a-4b23-405e-afed-de5776d7a46e-kube-api-access-7ptjh\") pod \"nova-cell0-3a07-account-create-update-8d4gz\" (UID: \"0c5cca9a-4b23-405e-afed-de5776d7a46e\") " pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.809565 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.818957 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-operator-scripts\") pod \"nova-cell1-9c05-account-create-update-wvb9n\" (UID: \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\") " pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.819097 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn267\" (UniqueName: \"kubernetes.io/projected/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-kube-api-access-zn267\") pod \"nova-cell1-9c05-account-create-update-wvb9n\" (UID: \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\") " pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.820103 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-operator-scripts\") pod \"nova-cell1-9c05-account-create-update-wvb9n\" (UID: \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\") " pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.847143 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn267\" (UniqueName: \"kubernetes.io/projected/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-kube-api-access-zn267\") pod \"nova-cell1-9c05-account-create-update-wvb9n\" (UID: \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\") " pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 09:22:02.906113 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-pm9cp"] Feb 28 09:22:02 crc kubenswrapper[4996]: I0228 
09:22:02.934724 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.112052 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerStarted","Data":"0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db"} Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.112113 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerStarted","Data":"4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822"} Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.114144 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bg572"] Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.134070 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537842-lvg4x" event={"ID":"9b3aa079-adbe-4f89-a7cb-7cece7b04a9d","Type":"ContainerStarted","Data":"b8e0af30f13c87d932e02a24fd301666746b44c74e03bad427b1c2bdf7b48bf9"} Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.149257 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pm9cp" event={"ID":"71586534-5889-4027-8688-9b5b3e3394ea","Type":"ContainerStarted","Data":"3d62a2a8b2413d96d2bac7ceb01a2a1c7f2ca82d5f07289414e3586ef1f80b69"} Feb 28 09:22:03 crc kubenswrapper[4996]: W0228 09:22:03.149336 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46352d7d_4e62_4a29_8814_a8e2e33ef813.slice/crio-faadbfb41bd2735516f87f4b422f55469a116dc7f44f0dc5afc49c8cfcbd65db WatchSource:0}: Error finding container faadbfb41bd2735516f87f4b422f55469a116dc7f44f0dc5afc49c8cfcbd65db: Status 404 returned error can't 
find the container with id faadbfb41bd2735516f87f4b422f55469a116dc7f44f0dc5afc49c8cfcbd65db Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.173345 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537842-lvg4x" podStartSLOduration=2.088805116 podStartE2EDuration="3.173327747s" podCreationTimestamp="2026-02-28 09:22:00 +0000 UTC" firstStartedPulling="2026-02-28 09:22:01.076301368 +0000 UTC m=+1284.767104179" lastFinishedPulling="2026-02-28 09:22:02.160824009 +0000 UTC m=+1285.851626810" observedRunningTime="2026-02-28 09:22:03.16727696 +0000 UTC m=+1286.858079781" watchObservedRunningTime="2026-02-28 09:22:03.173327747 +0000 UTC m=+1286.864130558" Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.369033 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-815f-account-create-update-wg8b6"] Feb 28 09:22:03 crc kubenswrapper[4996]: W0228 09:22:03.375908 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9363ee4e_971c_4a87_9c13_a349d02ac678.slice/crio-ba982ebbd0d9ebd02b222aa27cce2af8412d706551f366447c651cc808dcb6e3 WatchSource:0}: Error finding container ba982ebbd0d9ebd02b222aa27cce2af8412d706551f366447c651cc808dcb6e3: Status 404 returned error can't find the container with id ba982ebbd0d9ebd02b222aa27cce2af8412d706551f366447c651cc808dcb6e3 Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.502493 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-j68mh"] Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.615062 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-9c05-account-create-update-wvb9n"] Feb 28 09:22:03 crc kubenswrapper[4996]: W0228 09:22:03.619120 4996 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d4bc4b2_0222_49fc_995c_0b809d5e19fe.slice/crio-94d3a7b33ac12d174bcff5d9d32d27f1339e7e6b12dce9fd09f23a07adae8e03 WatchSource:0}: Error finding container 94d3a7b33ac12d174bcff5d9d32d27f1339e7e6b12dce9fd09f23a07adae8e03: Status 404 returned error can't find the container with id 94d3a7b33ac12d174bcff5d9d32d27f1339e7e6b12dce9fd09f23a07adae8e03 Feb 28 09:22:03 crc kubenswrapper[4996]: W0228 09:22:03.625958 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c5cca9a_4b23_405e_afed_de5776d7a46e.slice/crio-6f50e5f6a94882c05f8a43ae912ea74495aa7c294aecd4b326e76525f39e78b8 WatchSource:0}: Error finding container 6f50e5f6a94882c05f8a43ae912ea74495aa7c294aecd4b326e76525f39e78b8: Status 404 returned error can't find the container with id 6f50e5f6a94882c05f8a43ae912ea74495aa7c294aecd4b326e76525f39e78b8 Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.632597 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3a07-account-create-update-8d4gz"] Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.781362 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55cc9dfcd4-m6gv8" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Feb 28 09:22:03 crc kubenswrapper[4996]: I0228 09:22:03.781481 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.161761 4996 generic.go:334] "Generic (PLEG): container finished" podID="71586534-5889-4027-8688-9b5b3e3394ea" containerID="68153e18b02681effe53edc7513fee05c8b06db05573f0b6ab718cef7ebd74f6" exitCode=0 Feb 28 09:22:04 crc kubenswrapper[4996]: 
I0228 09:22:04.162203 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pm9cp" event={"ID":"71586534-5889-4027-8688-9b5b3e3394ea","Type":"ContainerDied","Data":"68153e18b02681effe53edc7513fee05c8b06db05573f0b6ab718cef7ebd74f6"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.163974 4996 generic.go:334] "Generic (PLEG): container finished" podID="0c5cca9a-4b23-405e-afed-de5776d7a46e" containerID="7c86db71eab66a95cd99adf9766be543240f94c3d2efb6c2c09e8b95b07bfa8c" exitCode=0 Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.164058 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" event={"ID":"0c5cca9a-4b23-405e-afed-de5776d7a46e","Type":"ContainerDied","Data":"7c86db71eab66a95cd99adf9766be543240f94c3d2efb6c2c09e8b95b07bfa8c"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.164081 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" event={"ID":"0c5cca9a-4b23-405e-afed-de5776d7a46e","Type":"ContainerStarted","Data":"6f50e5f6a94882c05f8a43ae912ea74495aa7c294aecd4b326e76525f39e78b8"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.165771 4996 generic.go:334] "Generic (PLEG): container finished" podID="c6a7e7c0-e530-41f6-a62e-57f53bb376b8" containerID="8700d607250633fe65d7d4780548bd3013caf74ed57c222d1ddb17519c45634c" exitCode=0 Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.165833 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j68mh" event={"ID":"c6a7e7c0-e530-41f6-a62e-57f53bb376b8","Type":"ContainerDied","Data":"8700d607250633fe65d7d4780548bd3013caf74ed57c222d1ddb17519c45634c"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.165854 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j68mh" 
event={"ID":"c6a7e7c0-e530-41f6-a62e-57f53bb376b8","Type":"ContainerStarted","Data":"b3620583111741e794e77ee2f98a80df0034fa7c0d9baccb0dfdc2a6f9fbc131"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.172101 4996 generic.go:334] "Generic (PLEG): container finished" podID="9363ee4e-971c-4a87-9c13-a349d02ac678" containerID="7f8689ef60cb53d8d46d0698dcc448d0c0d425b7cc9d4243f66061c92be146bc" exitCode=0 Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.172177 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-815f-account-create-update-wg8b6" event={"ID":"9363ee4e-971c-4a87-9c13-a349d02ac678","Type":"ContainerDied","Data":"7f8689ef60cb53d8d46d0698dcc448d0c0d425b7cc9d4243f66061c92be146bc"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.172205 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-815f-account-create-update-wg8b6" event={"ID":"9363ee4e-971c-4a87-9c13-a349d02ac678","Type":"ContainerStarted","Data":"ba982ebbd0d9ebd02b222aa27cce2af8412d706551f366447c651cc808dcb6e3"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.173837 4996 generic.go:334] "Generic (PLEG): container finished" podID="9d4bc4b2-0222-49fc-995c-0b809d5e19fe" containerID="16bdb32eab1059351bc80a579d8edfca3cc2ad8818df424f78c453f17975d860" exitCode=0 Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.173876 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" event={"ID":"9d4bc4b2-0222-49fc-995c-0b809d5e19fe","Type":"ContainerDied","Data":"16bdb32eab1059351bc80a579d8edfca3cc2ad8818df424f78c453f17975d860"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.173891 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" event={"ID":"9d4bc4b2-0222-49fc-995c-0b809d5e19fe","Type":"ContainerStarted","Data":"94d3a7b33ac12d174bcff5d9d32d27f1339e7e6b12dce9fd09f23a07adae8e03"} Feb 28 09:22:04 crc 
kubenswrapper[4996]: I0228 09:22:04.176423 4996 generic.go:334] "Generic (PLEG): container finished" podID="46352d7d-4e62-4a29-8814-a8e2e33ef813" containerID="65d722f6ed1409211b82ceadb6c0fab219b7eee4e3a6c876a8e17a30bc40a354" exitCode=0 Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.176568 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bg572" event={"ID":"46352d7d-4e62-4a29-8814-a8e2e33ef813","Type":"ContainerDied","Data":"65d722f6ed1409211b82ceadb6c0fab219b7eee4e3a6c876a8e17a30bc40a354"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.176601 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bg572" event={"ID":"46352d7d-4e62-4a29-8814-a8e2e33ef813","Type":"ContainerStarted","Data":"faadbfb41bd2735516f87f4b422f55469a116dc7f44f0dc5afc49c8cfcbd65db"} Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.178398 4996 generic.go:334] "Generic (PLEG): container finished" podID="9b3aa079-adbe-4f89-a7cb-7cece7b04a9d" containerID="b8e0af30f13c87d932e02a24fd301666746b44c74e03bad427b1c2bdf7b48bf9" exitCode=0 Feb 28 09:22:04 crc kubenswrapper[4996]: I0228 09:22:04.178437 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537842-lvg4x" event={"ID":"9b3aa079-adbe-4f89-a7cb-7cece7b04a9d","Type":"ContainerDied","Data":"b8e0af30f13c87d932e02a24fd301666746b44c74e03bad427b1c2bdf7b48bf9"} Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.190485 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerStarted","Data":"63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca"} Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.191041 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="ceilometer-central-agent" 
containerID="cri-o://cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce" gracePeriod=30 Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.191304 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="sg-core" containerID="cri-o://0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db" gracePeriod=30 Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.191409 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="ceilometer-notification-agent" containerID="cri-o://4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822" gracePeriod=30 Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.191432 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="proxy-httpd" containerID="cri-o://63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca" gracePeriod=30 Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.193225 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.237321 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.805391512 podStartE2EDuration="5.237299346s" podCreationTimestamp="2026-02-28 09:22:00 +0000 UTC" firstStartedPulling="2026-02-28 09:22:01.008830763 +0000 UTC m=+1284.699633594" lastFinishedPulling="2026-02-28 09:22:04.440738617 +0000 UTC m=+1288.131541428" observedRunningTime="2026-02-28 09:22:05.232039539 +0000 UTC m=+1288.922842370" watchObservedRunningTime="2026-02-28 09:22:05.237299346 +0000 UTC m=+1288.928102167" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.688948 4996 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.691174 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.726297 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.726486 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537842-lvg4x" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.733982 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.738655 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.752724 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.787195 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-operator-scripts\") pod \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\" (UID: \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.787533 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zn267\" (UniqueName: \"kubernetes.io/projected/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-kube-api-access-zn267\") pod \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\" (UID: \"9d4bc4b2-0222-49fc-995c-0b809d5e19fe\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.787741 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71586534-5889-4027-8688-9b5b3e3394ea-operator-scripts\") pod \"71586534-5889-4027-8688-9b5b3e3394ea\" (UID: \"71586534-5889-4027-8688-9b5b3e3394ea\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.787821 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwhmg\" (UniqueName: \"kubernetes.io/projected/71586534-5889-4027-8688-9b5b3e3394ea-kube-api-access-jwhmg\") pod \"71586534-5889-4027-8688-9b5b3e3394ea\" (UID: \"71586534-5889-4027-8688-9b5b3e3394ea\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.787737 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9d4bc4b2-0222-49fc-995c-0b809d5e19fe" (UID: "9d4bc4b2-0222-49fc-995c-0b809d5e19fe"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.788882 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71586534-5889-4027-8688-9b5b3e3394ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "71586534-5889-4027-8688-9b5b3e3394ea" (UID: "71586534-5889-4027-8688-9b5b3e3394ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.789464 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.789493 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/71586534-5889-4027-8688-9b5b3e3394ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.793246 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-kube-api-access-zn267" (OuterVolumeSpecName: "kube-api-access-zn267") pod "9d4bc4b2-0222-49fc-995c-0b809d5e19fe" (UID: "9d4bc4b2-0222-49fc-995c-0b809d5e19fe"). InnerVolumeSpecName "kube-api-access-zn267". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.793748 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71586534-5889-4027-8688-9b5b3e3394ea-kube-api-access-jwhmg" (OuterVolumeSpecName: "kube-api-access-jwhmg") pod "71586534-5889-4027-8688-9b5b3e3394ea" (UID: "71586534-5889-4027-8688-9b5b3e3394ea"). InnerVolumeSpecName "kube-api-access-jwhmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.890667 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctg7t\" (UniqueName: \"kubernetes.io/projected/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-kube-api-access-ctg7t\") pod \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\" (UID: \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.890717 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5cca9a-4b23-405e-afed-de5776d7a46e-operator-scripts\") pod \"0c5cca9a-4b23-405e-afed-de5776d7a46e\" (UID: \"0c5cca9a-4b23-405e-afed-de5776d7a46e\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.890814 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ptjh\" (UniqueName: \"kubernetes.io/projected/0c5cca9a-4b23-405e-afed-de5776d7a46e-kube-api-access-7ptjh\") pod \"0c5cca9a-4b23-405e-afed-de5776d7a46e\" (UID: \"0c5cca9a-4b23-405e-afed-de5776d7a46e\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.890833 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phvs9\" (UniqueName: \"kubernetes.io/projected/9b3aa079-adbe-4f89-a7cb-7cece7b04a9d-kube-api-access-phvs9\") pod \"9b3aa079-adbe-4f89-a7cb-7cece7b04a9d\" (UID: \"9b3aa079-adbe-4f89-a7cb-7cece7b04a9d\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.890871 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46352d7d-4e62-4a29-8814-a8e2e33ef813-operator-scripts\") pod \"46352d7d-4e62-4a29-8814-a8e2e33ef813\" (UID: \"46352d7d-4e62-4a29-8814-a8e2e33ef813\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.890898 4996 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-operator-scripts\") pod \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\" (UID: \"c6a7e7c0-e530-41f6-a62e-57f53bb376b8\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.891292 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46352d7d-4e62-4a29-8814-a8e2e33ef813-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46352d7d-4e62-4a29-8814-a8e2e33ef813" (UID: "46352d7d-4e62-4a29-8814-a8e2e33ef813"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.891335 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmkrl\" (UniqueName: \"kubernetes.io/projected/9363ee4e-971c-4a87-9c13-a349d02ac678-kube-api-access-nmkrl\") pod \"9363ee4e-971c-4a87-9c13-a349d02ac678\" (UID: \"9363ee4e-971c-4a87-9c13-a349d02ac678\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.891381 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm258\" (UniqueName: \"kubernetes.io/projected/46352d7d-4e62-4a29-8814-a8e2e33ef813-kube-api-access-jm258\") pod \"46352d7d-4e62-4a29-8814-a8e2e33ef813\" (UID: \"46352d7d-4e62-4a29-8814-a8e2e33ef813\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.891399 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9363ee4e-971c-4a87-9c13-a349d02ac678-operator-scripts\") pod \"9363ee4e-971c-4a87-9c13-a349d02ac678\" (UID: \"9363ee4e-971c-4a87-9c13-a349d02ac678\") " Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.891436 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6a7e7c0-e530-41f6-a62e-57f53bb376b8" (UID: "c6a7e7c0-e530-41f6-a62e-57f53bb376b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.891789 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c5cca9a-4b23-405e-afed-de5776d7a46e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c5cca9a-4b23-405e-afed-de5776d7a46e" (UID: "0c5cca9a-4b23-405e-afed-de5776d7a46e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.891924 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9363ee4e-971c-4a87-9c13-a349d02ac678-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9363ee4e-971c-4a87-9c13-a349d02ac678" (UID: "9363ee4e-971c-4a87-9c13-a349d02ac678"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.892405 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46352d7d-4e62-4a29-8814-a8e2e33ef813-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.892426 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.892436 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9363ee4e-971c-4a87-9c13-a349d02ac678-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.892445 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwhmg\" (UniqueName: \"kubernetes.io/projected/71586534-5889-4027-8688-9b5b3e3394ea-kube-api-access-jwhmg\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.892455 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c5cca9a-4b23-405e-afed-de5776d7a46e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.892464 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zn267\" (UniqueName: \"kubernetes.io/projected/9d4bc4b2-0222-49fc-995c-0b809d5e19fe-kube-api-access-zn267\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.894527 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-kube-api-access-ctg7t" (OuterVolumeSpecName: "kube-api-access-ctg7t") pod 
"c6a7e7c0-e530-41f6-a62e-57f53bb376b8" (UID: "c6a7e7c0-e530-41f6-a62e-57f53bb376b8"). InnerVolumeSpecName "kube-api-access-ctg7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.895262 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b3aa079-adbe-4f89-a7cb-7cece7b04a9d-kube-api-access-phvs9" (OuterVolumeSpecName: "kube-api-access-phvs9") pod "9b3aa079-adbe-4f89-a7cb-7cece7b04a9d" (UID: "9b3aa079-adbe-4f89-a7cb-7cece7b04a9d"). InnerVolumeSpecName "kube-api-access-phvs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.895449 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9363ee4e-971c-4a87-9c13-a349d02ac678-kube-api-access-nmkrl" (OuterVolumeSpecName: "kube-api-access-nmkrl") pod "9363ee4e-971c-4a87-9c13-a349d02ac678" (UID: "9363ee4e-971c-4a87-9c13-a349d02ac678"). InnerVolumeSpecName "kube-api-access-nmkrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.895493 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c5cca9a-4b23-405e-afed-de5776d7a46e-kube-api-access-7ptjh" (OuterVolumeSpecName: "kube-api-access-7ptjh") pod "0c5cca9a-4b23-405e-afed-de5776d7a46e" (UID: "0c5cca9a-4b23-405e-afed-de5776d7a46e"). InnerVolumeSpecName "kube-api-access-7ptjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.895838 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46352d7d-4e62-4a29-8814-a8e2e33ef813-kube-api-access-jm258" (OuterVolumeSpecName: "kube-api-access-jm258") pod "46352d7d-4e62-4a29-8814-a8e2e33ef813" (UID: "46352d7d-4e62-4a29-8814-a8e2e33ef813"). InnerVolumeSpecName "kube-api-access-jm258". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.994457 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmkrl\" (UniqueName: \"kubernetes.io/projected/9363ee4e-971c-4a87-9c13-a349d02ac678-kube-api-access-nmkrl\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.994490 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jm258\" (UniqueName: \"kubernetes.io/projected/46352d7d-4e62-4a29-8814-a8e2e33ef813-kube-api-access-jm258\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.994501 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctg7t\" (UniqueName: \"kubernetes.io/projected/c6a7e7c0-e530-41f6-a62e-57f53bb376b8-kube-api-access-ctg7t\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.994512 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ptjh\" (UniqueName: \"kubernetes.io/projected/0c5cca9a-4b23-405e-afed-de5776d7a46e-kube-api-access-7ptjh\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:05 crc kubenswrapper[4996]: I0228 09:22:05.994525 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phvs9\" (UniqueName: \"kubernetes.io/projected/9b3aa079-adbe-4f89-a7cb-7cece7b04a9d-kube-api-access-phvs9\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.214532 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-pm9cp" event={"ID":"71586534-5889-4027-8688-9b5b3e3394ea","Type":"ContainerDied","Data":"3d62a2a8b2413d96d2bac7ceb01a2a1c7f2ca82d5f07289414e3586ef1f80b69"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.214589 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d62a2a8b2413d96d2bac7ceb01a2a1c7f2ca82d5f07289414e3586ef1f80b69" Feb 28 
09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.214588 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-pm9cp" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.217638 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" event={"ID":"0c5cca9a-4b23-405e-afed-de5776d7a46e","Type":"ContainerDied","Data":"6f50e5f6a94882c05f8a43ae912ea74495aa7c294aecd4b326e76525f39e78b8"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.217775 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f50e5f6a94882c05f8a43ae912ea74495aa7c294aecd4b326e76525f39e78b8" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.217863 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3a07-account-create-update-8d4gz" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.222059 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-j68mh" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.222082 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-j68mh" event={"ID":"c6a7e7c0-e530-41f6-a62e-57f53bb376b8","Type":"ContainerDied","Data":"b3620583111741e794e77ee2f98a80df0034fa7c0d9baccb0dfdc2a6f9fbc131"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.222164 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3620583111741e794e77ee2f98a80df0034fa7c0d9baccb0dfdc2a6f9fbc131" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.236727 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-815f-account-create-update-wg8b6" event={"ID":"9363ee4e-971c-4a87-9c13-a349d02ac678","Type":"ContainerDied","Data":"ba982ebbd0d9ebd02b222aa27cce2af8412d706551f366447c651cc808dcb6e3"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.239816 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba982ebbd0d9ebd02b222aa27cce2af8412d706551f366447c651cc808dcb6e3" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.237786 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-815f-account-create-update-wg8b6" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.242043 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" event={"ID":"9d4bc4b2-0222-49fc-995c-0b809d5e19fe","Type":"ContainerDied","Data":"94d3a7b33ac12d174bcff5d9d32d27f1339e7e6b12dce9fd09f23a07adae8e03"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.242094 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94d3a7b33ac12d174bcff5d9d32d27f1339e7e6b12dce9fd09f23a07adae8e03" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.242195 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-9c05-account-create-update-wvb9n" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.246422 4996 generic.go:334] "Generic (PLEG): container finished" podID="86f3768c-1150-4715-b5ef-17cc1471697d" containerID="63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca" exitCode=0 Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.246460 4996 generic.go:334] "Generic (PLEG): container finished" podID="86f3768c-1150-4715-b5ef-17cc1471697d" containerID="0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db" exitCode=2 Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.246473 4996 generic.go:334] "Generic (PLEG): container finished" podID="86f3768c-1150-4715-b5ef-17cc1471697d" containerID="4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822" exitCode=0 Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.246547 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerDied","Data":"63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.246577 4996 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerDied","Data":"0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.246593 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerDied","Data":"4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.251240 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bg572" event={"ID":"46352d7d-4e62-4a29-8814-a8e2e33ef813","Type":"ContainerDied","Data":"faadbfb41bd2735516f87f4b422f55469a116dc7f44f0dc5afc49c8cfcbd65db"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.251283 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faadbfb41bd2735516f87f4b422f55469a116dc7f44f0dc5afc49c8cfcbd65db" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.251375 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bg572" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.262436 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537842-lvg4x" event={"ID":"9b3aa079-adbe-4f89-a7cb-7cece7b04a9d","Type":"ContainerDied","Data":"b7b81beb8871bd7b8535d527510205a0b4e57f96c1aa727218eaf090fcd66941"} Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.262475 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7b81beb8871bd7b8535d527510205a0b4e57f96c1aa727218eaf090fcd66941" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.262594 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537842-lvg4x" Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.282095 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537836-ldstb"] Feb 28 09:22:06 crc kubenswrapper[4996]: I0228 09:22:06.306513 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537836-ldstb"] Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.044503 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbd90ac2-a781-47e5-9ee0-03514d3d2c99" path="/var/lib/kubelet/pods/dbd90ac2-a781-47e5-9ee0-03514d3d2c99/volumes" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.261642 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-679bcc7697-9hs5j" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.356261 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58bbf8b97d-2bk65"] Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.356815 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58bbf8b97d-2bk65" podUID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerName="neutron-api" containerID="cri-o://4c1c68b34ee07c2f31465106cd4577c27aa1e33f204c0a55ffe4b78f3d9514c9" gracePeriod=30 Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.356853 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58bbf8b97d-2bk65" podUID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerName="neutron-httpd" containerID="cri-o://6dac997efbef4dfeb46bf2cc75a8674846f13afc16332a4e091183026d62ee72" gracePeriod=30 Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702191 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwpcz"] Feb 28 09:22:07 crc kubenswrapper[4996]: E0228 09:22:07.702615 4996 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9b3aa079-adbe-4f89-a7cb-7cece7b04a9d" containerName="oc" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702634 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b3aa079-adbe-4f89-a7cb-7cece7b04a9d" containerName="oc" Feb 28 09:22:07 crc kubenswrapper[4996]: E0228 09:22:07.702645 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46352d7d-4e62-4a29-8814-a8e2e33ef813" containerName="mariadb-database-create" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702655 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="46352d7d-4e62-4a29-8814-a8e2e33ef813" containerName="mariadb-database-create" Feb 28 09:22:07 crc kubenswrapper[4996]: E0228 09:22:07.702674 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9363ee4e-971c-4a87-9c13-a349d02ac678" containerName="mariadb-account-create-update" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702682 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9363ee4e-971c-4a87-9c13-a349d02ac678" containerName="mariadb-account-create-update" Feb 28 09:22:07 crc kubenswrapper[4996]: E0228 09:22:07.702693 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d4bc4b2-0222-49fc-995c-0b809d5e19fe" containerName="mariadb-account-create-update" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702701 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d4bc4b2-0222-49fc-995c-0b809d5e19fe" containerName="mariadb-account-create-update" Feb 28 09:22:07 crc kubenswrapper[4996]: E0228 09:22:07.702716 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71586534-5889-4027-8688-9b5b3e3394ea" containerName="mariadb-database-create" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702724 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="71586534-5889-4027-8688-9b5b3e3394ea" containerName="mariadb-database-create" Feb 28 09:22:07 crc kubenswrapper[4996]: 
E0228 09:22:07.702735 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c5cca9a-4b23-405e-afed-de5776d7a46e" containerName="mariadb-account-create-update" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702743 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c5cca9a-4b23-405e-afed-de5776d7a46e" containerName="mariadb-account-create-update" Feb 28 09:22:07 crc kubenswrapper[4996]: E0228 09:22:07.702764 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6a7e7c0-e530-41f6-a62e-57f53bb376b8" containerName="mariadb-database-create" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702773 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6a7e7c0-e530-41f6-a62e-57f53bb376b8" containerName="mariadb-database-create" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702955 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c5cca9a-4b23-405e-afed-de5776d7a46e" containerName="mariadb-account-create-update" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702969 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b3aa079-adbe-4f89-a7cb-7cece7b04a9d" containerName="oc" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702977 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9363ee4e-971c-4a87-9c13-a349d02ac678" containerName="mariadb-account-create-update" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.702989 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6a7e7c0-e530-41f6-a62e-57f53bb376b8" containerName="mariadb-database-create" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.703022 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d4bc4b2-0222-49fc-995c-0b809d5e19fe" containerName="mariadb-account-create-update" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.703036 4996 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="46352d7d-4e62-4a29-8814-a8e2e33ef813" containerName="mariadb-database-create" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.703051 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="71586534-5889-4027-8688-9b5b3e3394ea" containerName="mariadb-database-create" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.703701 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.705694 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.706118 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9g9kx" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.711539 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.725232 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwpcz"] Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.830608 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-config-data\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.830748 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwbrc\" (UniqueName: \"kubernetes.io/projected/d97aca1c-945f-4e22-aa03-667cc7345de5-kube-api-access-bwbrc\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " 
pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.830790 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-scripts\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.830929 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.932221 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-config-data\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.932307 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwbrc\" (UniqueName: \"kubernetes.io/projected/d97aca1c-945f-4e22-aa03-667cc7345de5-kube-api-access-bwbrc\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.932345 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-scripts\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: 
\"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.932462 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.937809 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-scripts\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.938174 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-config-data\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.945700 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:07 crc kubenswrapper[4996]: I0228 09:22:07.948102 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwbrc\" (UniqueName: \"kubernetes.io/projected/d97aca1c-945f-4e22-aa03-667cc7345de5-kube-api-access-bwbrc\") pod \"nova-cell0-conductor-db-sync-gwpcz\" (UID: 
\"d97aca1c-945f-4e22-aa03-667cc7345de5\") " pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:08 crc kubenswrapper[4996]: I0228 09:22:08.026017 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:08 crc kubenswrapper[4996]: I0228 09:22:08.290441 4996 generic.go:334] "Generic (PLEG): container finished" podID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerID="6dac997efbef4dfeb46bf2cc75a8674846f13afc16332a4e091183026d62ee72" exitCode=0 Feb 28 09:22:08 crc kubenswrapper[4996]: I0228 09:22:08.290493 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58bbf8b97d-2bk65" event={"ID":"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e","Type":"ContainerDied","Data":"6dac997efbef4dfeb46bf2cc75a8674846f13afc16332a4e091183026d62ee72"} Feb 28 09:22:08 crc kubenswrapper[4996]: W0228 09:22:08.440540 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd97aca1c_945f_4e22_aa03_667cc7345de5.slice/crio-da090829f761098c12a4b58b44c1e20da2770befdb4a4aeed39cd4a3062893cc WatchSource:0}: Error finding container da090829f761098c12a4b58b44c1e20da2770befdb4a4aeed39cd4a3062893cc: Status 404 returned error can't find the container with id da090829f761098c12a4b58b44c1e20da2770befdb4a4aeed39cd4a3062893cc Feb 28 09:22:08 crc kubenswrapper[4996]: I0228 09:22:08.441755 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwpcz"] Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.290262 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.299234 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gwpcz" event={"ID":"d97aca1c-945f-4e22-aa03-667cc7345de5","Type":"ContainerStarted","Data":"da090829f761098c12a4b58b44c1e20da2770befdb4a4aeed39cd4a3062893cc"} Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.303248 4996 generic.go:334] "Generic (PLEG): container finished" podID="86f3768c-1150-4715-b5ef-17cc1471697d" containerID="cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce" exitCode=0 Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.303287 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerDied","Data":"cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce"} Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.303308 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"86f3768c-1150-4715-b5ef-17cc1471697d","Type":"ContainerDied","Data":"862b3c75dc19d8b6f86bafbf76431e8e52ce7646e26502dfb1a1c2f9ef446594"} Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.303353 4996 scope.go:117] "RemoveContainer" containerID="63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.303498 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.339374 4996 scope.go:117] "RemoveContainer" containerID="0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.357544 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-combined-ca-bundle\") pod \"86f3768c-1150-4715-b5ef-17cc1471697d\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.357590 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-log-httpd\") pod \"86f3768c-1150-4715-b5ef-17cc1471697d\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.357658 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-run-httpd\") pod \"86f3768c-1150-4715-b5ef-17cc1471697d\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.357731 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-scripts\") pod \"86f3768c-1150-4715-b5ef-17cc1471697d\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.357749 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-sg-core-conf-yaml\") pod \"86f3768c-1150-4715-b5ef-17cc1471697d\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " Feb 28 09:22:09 crc 
kubenswrapper[4996]: I0228 09:22:09.357766 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmqf6\" (UniqueName: \"kubernetes.io/projected/86f3768c-1150-4715-b5ef-17cc1471697d-kube-api-access-mmqf6\") pod \"86f3768c-1150-4715-b5ef-17cc1471697d\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.357817 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-config-data\") pod \"86f3768c-1150-4715-b5ef-17cc1471697d\" (UID: \"86f3768c-1150-4715-b5ef-17cc1471697d\") " Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.358473 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "86f3768c-1150-4715-b5ef-17cc1471697d" (UID: "86f3768c-1150-4715-b5ef-17cc1471697d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.358745 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.359471 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "86f3768c-1150-4715-b5ef-17cc1471697d" (UID: "86f3768c-1150-4715-b5ef-17cc1471697d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.377592 4996 scope.go:117] "RemoveContainer" containerID="4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.377631 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-scripts" (OuterVolumeSpecName: "scripts") pod "86f3768c-1150-4715-b5ef-17cc1471697d" (UID: "86f3768c-1150-4715-b5ef-17cc1471697d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.377981 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f3768c-1150-4715-b5ef-17cc1471697d-kube-api-access-mmqf6" (OuterVolumeSpecName: "kube-api-access-mmqf6") pod "86f3768c-1150-4715-b5ef-17cc1471697d" (UID: "86f3768c-1150-4715-b5ef-17cc1471697d"). InnerVolumeSpecName "kube-api-access-mmqf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.397231 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "86f3768c-1150-4715-b5ef-17cc1471697d" (UID: "86f3768c-1150-4715-b5ef-17cc1471697d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.436138 4996 scope.go:117] "RemoveContainer" containerID="cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.448071 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86f3768c-1150-4715-b5ef-17cc1471697d" (UID: "86f3768c-1150-4715-b5ef-17cc1471697d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.461100 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.461132 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.461147 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmqf6\" (UniqueName: \"kubernetes.io/projected/86f3768c-1150-4715-b5ef-17cc1471697d-kube-api-access-mmqf6\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.461162 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.461174 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86f3768c-1150-4715-b5ef-17cc1471697d-log-httpd\") on node \"crc\" 
DevicePath \"\"" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.491990 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-config-data" (OuterVolumeSpecName: "config-data") pod "86f3768c-1150-4715-b5ef-17cc1471697d" (UID: "86f3768c-1150-4715-b5ef-17cc1471697d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.493995 4996 scope.go:117] "RemoveContainer" containerID="63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca" Feb 28 09:22:09 crc kubenswrapper[4996]: E0228 09:22:09.494408 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca\": container with ID starting with 63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca not found: ID does not exist" containerID="63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.494450 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca"} err="failed to get container status \"63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca\": rpc error: code = NotFound desc = could not find container \"63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca\": container with ID starting with 63dd6aee3854b983749c8504e38576373540efa9ba71af8a471c8395a63d67ca not found: ID does not exist" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.494476 4996 scope.go:117] "RemoveContainer" containerID="0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db" Feb 28 09:22:09 crc kubenswrapper[4996]: E0228 09:22:09.494848 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db\": container with ID starting with 0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db not found: ID does not exist" containerID="0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.494884 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db"} err="failed to get container status \"0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db\": rpc error: code = NotFound desc = could not find container \"0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db\": container with ID starting with 0feed98285670647d4ea513b117d5d9d3949a75b2236d642610f86941331d4db not found: ID does not exist" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.494912 4996 scope.go:117] "RemoveContainer" containerID="4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822" Feb 28 09:22:09 crc kubenswrapper[4996]: E0228 09:22:09.495208 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822\": container with ID starting with 4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822 not found: ID does not exist" containerID="4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.495312 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822"} err="failed to get container status \"4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822\": rpc error: code = NotFound desc = could not find container 
\"4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822\": container with ID starting with 4130ff514c8a92201196063e6eb80294d629f1e280cd6caa815de397dd71e822 not found: ID does not exist" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.495386 4996 scope.go:117] "RemoveContainer" containerID="cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce" Feb 28 09:22:09 crc kubenswrapper[4996]: E0228 09:22:09.495740 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce\": container with ID starting with cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce not found: ID does not exist" containerID="cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.495817 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce"} err="failed to get container status \"cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce\": rpc error: code = NotFound desc = could not find container \"cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce\": container with ID starting with cc7db4daadc14849694d1c36280f814f7f134dd785224aa38163cbda8691ecce not found: ID does not exist" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.563531 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f3768c-1150-4715-b5ef-17cc1471697d-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.646501 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.654201 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 
09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.668128 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:09 crc kubenswrapper[4996]: E0228 09:22:09.668468 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="ceilometer-central-agent" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.668482 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="ceilometer-central-agent" Feb 28 09:22:09 crc kubenswrapper[4996]: E0228 09:22:09.668495 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="proxy-httpd" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.668501 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="proxy-httpd" Feb 28 09:22:09 crc kubenswrapper[4996]: E0228 09:22:09.668524 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="ceilometer-notification-agent" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.668531 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="ceilometer-notification-agent" Feb 28 09:22:09 crc kubenswrapper[4996]: E0228 09:22:09.668547 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="sg-core" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.668552 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="sg-core" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.668726 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="sg-core" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 
09:22:09.668742 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="ceilometer-notification-agent" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.668752 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="proxy-httpd" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.668763 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" containerName="ceilometer-central-agent" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.670221 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.672630 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.672810 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.701126 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.766712 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-log-httpd\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.766764 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-config-data\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc 
kubenswrapper[4996]: I0228 09:22:09.766832 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-run-httpd\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.766884 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.766959 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.766995 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8nmd\" (UniqueName: \"kubernetes.io/projected/d295f8c1-00c6-4896-bd35-9b0ed488a002-kube-api-access-b8nmd\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.767048 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-scripts\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.869049 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.869416 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8nmd\" (UniqueName: \"kubernetes.io/projected/d295f8c1-00c6-4896-bd35-9b0ed488a002-kube-api-access-b8nmd\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.869460 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-scripts\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.869532 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-log-httpd\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.869555 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-config-data\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.869615 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-run-httpd\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc 
kubenswrapper[4996]: I0228 09:22:09.869671 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.871026 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-log-httpd\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.871128 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-run-httpd\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.875654 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.876030 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-config-data\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.884694 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-scripts\") pod \"ceilometer-0\" (UID: 
\"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.884751 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:09 crc kubenswrapper[4996]: I0228 09:22:09.893331 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8nmd\" (UniqueName: \"kubernetes.io/projected/d295f8c1-00c6-4896-bd35-9b0ed488a002-kube-api-access-b8nmd\") pod \"ceilometer-0\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " pod="openstack/ceilometer-0" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.002513 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.136052 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.279651 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-scripts\") pod \"9dcd95e8-c193-47ef-bc21-acabccfcff53\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.279802 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-tls-certs\") pod \"9dcd95e8-c193-47ef-bc21-acabccfcff53\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.279854 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-secret-key\") pod \"9dcd95e8-c193-47ef-bc21-acabccfcff53\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.279959 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-config-data\") pod \"9dcd95e8-c193-47ef-bc21-acabccfcff53\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.280060 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dcd95e8-c193-47ef-bc21-acabccfcff53-logs\") pod \"9dcd95e8-c193-47ef-bc21-acabccfcff53\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.280403 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m49v\" (UniqueName: 
\"kubernetes.io/projected/9dcd95e8-c193-47ef-bc21-acabccfcff53-kube-api-access-6m49v\") pod \"9dcd95e8-c193-47ef-bc21-acabccfcff53\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.280469 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-combined-ca-bundle\") pod \"9dcd95e8-c193-47ef-bc21-acabccfcff53\" (UID: \"9dcd95e8-c193-47ef-bc21-acabccfcff53\") " Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.284048 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dcd95e8-c193-47ef-bc21-acabccfcff53-logs" (OuterVolumeSpecName: "logs") pod "9dcd95e8-c193-47ef-bc21-acabccfcff53" (UID: "9dcd95e8-c193-47ef-bc21-acabccfcff53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.294092 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9dcd95e8-c193-47ef-bc21-acabccfcff53" (UID: "9dcd95e8-c193-47ef-bc21-acabccfcff53"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.294185 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dcd95e8-c193-47ef-bc21-acabccfcff53-kube-api-access-6m49v" (OuterVolumeSpecName: "kube-api-access-6m49v") pod "9dcd95e8-c193-47ef-bc21-acabccfcff53" (UID: "9dcd95e8-c193-47ef-bc21-acabccfcff53"). InnerVolumeSpecName "kube-api-access-6m49v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.308784 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-scripts" (OuterVolumeSpecName: "scripts") pod "9dcd95e8-c193-47ef-bc21-acabccfcff53" (UID: "9dcd95e8-c193-47ef-bc21-acabccfcff53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.311395 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dcd95e8-c193-47ef-bc21-acabccfcff53" (UID: "9dcd95e8-c193-47ef-bc21-acabccfcff53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.342105 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9dcd95e8-c193-47ef-bc21-acabccfcff53" (UID: "9dcd95e8-c193-47ef-bc21-acabccfcff53"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.363148 4996 generic.go:334] "Generic (PLEG): container finished" podID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerID="d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a" exitCode=137 Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.363212 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cc9dfcd4-m6gv8" event={"ID":"9dcd95e8-c193-47ef-bc21-acabccfcff53","Type":"ContainerDied","Data":"d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a"} Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.363238 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55cc9dfcd4-m6gv8" event={"ID":"9dcd95e8-c193-47ef-bc21-acabccfcff53","Type":"ContainerDied","Data":"7640b0cc663aeb3d7450d0d4559a4a81c8e92821577e1a8d946e06be1fd417f9"} Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.363254 4996 scope.go:117] "RemoveContainer" containerID="439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.363348 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55cc9dfcd4-m6gv8" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.368869 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-config-data" (OuterVolumeSpecName: "config-data") pod "9dcd95e8-c193-47ef-bc21-acabccfcff53" (UID: "9dcd95e8-c193-47ef-bc21-acabccfcff53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.386532 4996 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.386866 4996 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.386951 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.387053 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dcd95e8-c193-47ef-bc21-acabccfcff53-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.387139 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m49v\" (UniqueName: \"kubernetes.io/projected/9dcd95e8-c193-47ef-bc21-acabccfcff53-kube-api-access-6m49v\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.387216 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dcd95e8-c193-47ef-bc21-acabccfcff53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.387290 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9dcd95e8-c193-47ef-bc21-acabccfcff53-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.559699 4996 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.586654 4996 scope.go:117] "RemoveContainer" containerID="d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.586898 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.684064 4996 scope.go:117] "RemoveContainer" containerID="439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284" Feb 28 09:22:10 crc kubenswrapper[4996]: E0228 09:22:10.684559 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284\": container with ID starting with 439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284 not found: ID does not exist" containerID="439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.684591 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284"} err="failed to get container status \"439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284\": rpc error: code = NotFound desc = could not find container \"439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284\": container with ID starting with 439b242a17345c7a1fe50b7e7f6635ef680bfe385dde1858f1c9ffde313b0284 not found: ID does not exist" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.684613 4996 scope.go:117] "RemoveContainer" containerID="d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a" Feb 28 09:22:10 crc kubenswrapper[4996]: E0228 09:22:10.684917 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a\": container with ID starting with d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a not found: ID does not exist" containerID="d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.684942 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a"} err="failed to get container status \"d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a\": rpc error: code = NotFound desc = could not find container \"d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a\": container with ID starting with d55922e567383c8b40fc433f21cf88e0abb38b56cd4135ec77f2e5789014799a not found: ID does not exist" Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.723827 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55cc9dfcd4-m6gv8"] Feb 28 09:22:10 crc kubenswrapper[4996]: I0228 09:22:10.733750 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-55cc9dfcd4-m6gv8"] Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.046219 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f3768c-1150-4715-b5ef-17cc1471697d" path="/var/lib/kubelet/pods/86f3768c-1150-4715-b5ef-17cc1471697d/volumes" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.047487 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" path="/var/lib/kubelet/pods/9dcd95e8-c193-47ef-bc21-acabccfcff53/volumes" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.390506 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerStarted","Data":"dd844cc5e4bbc02a8302f58ce2cddd767150a162310bc105570ea73765404cbb"} Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.406589 4996 generic.go:334] "Generic (PLEG): container finished" podID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerID="4c1c68b34ee07c2f31465106cd4577c27aa1e33f204c0a55ffe4b78f3d9514c9" exitCode=0 Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.406661 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58bbf8b97d-2bk65" event={"ID":"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e","Type":"ContainerDied","Data":"4c1c68b34ee07c2f31465106cd4577c27aa1e33f204c0a55ffe4b78f3d9514c9"} Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.706841 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.768670 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.816240 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-config\") pod \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.816386 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-combined-ca-bundle\") pod \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.816440 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-ovndb-tls-certs\") pod \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.816513 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-httpd-config\") pod \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.816920 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shpvl\" (UniqueName: \"kubernetes.io/projected/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-kube-api-access-shpvl\") pod \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\" (UID: \"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e\") " Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.832155 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" (UID: "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.848619 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-kube-api-access-shpvl" (OuterVolumeSpecName: "kube-api-access-shpvl") pod "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" (UID: "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e"). InnerVolumeSpecName "kube-api-access-shpvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.893435 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" (UID: "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.896626 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-config" (OuterVolumeSpecName: "config") pod "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" (UID: "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.903210 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" (UID: "0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.922035 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.922070 4996 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.922080 4996 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.922089 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shpvl\" (UniqueName: \"kubernetes.io/projected/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-kube-api-access-shpvl\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.922099 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:11 crc kubenswrapper[4996]: I0228 09:22:11.938348 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-594c4f7c44-lnbrv" Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.017493 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c54f964b4-2pr6w"] Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.019230 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-c54f964b4-2pr6w" podUID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerName="placement-log" 
containerID="cri-o://51098c0364485dd40a4a50cfc6729823f2c3411e9f760bff229ecf1ce961a278" gracePeriod=30 Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.019630 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-c54f964b4-2pr6w" podUID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerName="placement-api" containerID="cri-o://847717fb96d7153b1ea89ea7c4cb33a2cdc4173d05d5e4bb193f470609c8d7af" gracePeriod=30 Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.419763 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58bbf8b97d-2bk65" event={"ID":"0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e","Type":"ContainerDied","Data":"801b702939741d7fcc43af4d633731fcfb36e18748b04db60b21fab2bb0e8e24"} Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.419806 4996 scope.go:117] "RemoveContainer" containerID="6dac997efbef4dfeb46bf2cc75a8674846f13afc16332a4e091183026d62ee72" Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.419811 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58bbf8b97d-2bk65" Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.422836 4996 generic.go:334] "Generic (PLEG): container finished" podID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerID="51098c0364485dd40a4a50cfc6729823f2c3411e9f760bff229ecf1ce961a278" exitCode=143 Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.422890 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c54f964b4-2pr6w" event={"ID":"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9","Type":"ContainerDied","Data":"51098c0364485dd40a4a50cfc6729823f2c3411e9f760bff229ecf1ce961a278"} Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.432434 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerStarted","Data":"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd"} Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.432480 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerStarted","Data":"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484"} Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.456515 4996 scope.go:117] "RemoveContainer" containerID="4c1c68b34ee07c2f31465106cd4577c27aa1e33f204c0a55ffe4b78f3d9514c9" Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.458698 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58bbf8b97d-2bk65"] Feb 28 09:22:12 crc kubenswrapper[4996]: I0228 09:22:12.471897 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58bbf8b97d-2bk65"] Feb 28 09:22:13 crc kubenswrapper[4996]: I0228 09:22:13.048897 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" path="/var/lib/kubelet/pods/0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e/volumes" Feb 28 09:22:13 crc 
kubenswrapper[4996]: I0228 09:22:13.443234 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerStarted","Data":"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418"} Feb 28 09:22:15 crc kubenswrapper[4996]: I0228 09:22:15.468528 4996 generic.go:334] "Generic (PLEG): container finished" podID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerID="847717fb96d7153b1ea89ea7c4cb33a2cdc4173d05d5e4bb193f470609c8d7af" exitCode=0 Feb 28 09:22:15 crc kubenswrapper[4996]: I0228 09:22:15.468578 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c54f964b4-2pr6w" event={"ID":"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9","Type":"ContainerDied","Data":"847717fb96d7153b1ea89ea7c4cb33a2cdc4173d05d5e4bb193f470609c8d7af"} Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.806900 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.856828 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-scripts\") pod \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.856948 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-public-tls-certs\") pod \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.856989 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-combined-ca-bundle\") pod 
\"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.857104 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-config-data\") pod \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.857129 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-logs\") pod \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.857145 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-internal-tls-certs\") pod \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.857197 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qw2b4\" (UniqueName: \"kubernetes.io/projected/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-kube-api-access-qw2b4\") pod \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\" (UID: \"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9\") " Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.857648 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-logs" (OuterVolumeSpecName: "logs") pod "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" (UID: "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.858075 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.862736 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-kube-api-access-qw2b4" (OuterVolumeSpecName: "kube-api-access-qw2b4") pod "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" (UID: "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9"). InnerVolumeSpecName "kube-api-access-qw2b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.866918 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-scripts" (OuterVolumeSpecName: "scripts") pod "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" (UID: "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.917227 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-config-data" (OuterVolumeSpecName: "config-data") pod "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" (UID: "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.929533 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" (UID: "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.943867 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" (UID: "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.959960 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qw2b4\" (UniqueName: \"kubernetes.io/projected/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-kube-api-access-qw2b4\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.959993 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.960150 4996 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.960163 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.960173 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:18 crc kubenswrapper[4996]: I0228 09:22:18.964965 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" (UID: "115e68d8-a2a1-4c21-ae7f-2ec4e47855f9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.062494 4996 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.528092 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerStarted","Data":"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd"} Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.528220 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.528190 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="ceilometer-central-agent" containerID="cri-o://731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484" gracePeriod=30 Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.528305 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="proxy-httpd" containerID="cri-o://3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd" gracePeriod=30 Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.528353 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="sg-core" 
containerID="cri-o://e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418" gracePeriod=30 Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.528391 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="ceilometer-notification-agent" containerID="cri-o://96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd" gracePeriod=30 Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.533240 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gwpcz" event={"ID":"d97aca1c-945f-4e22-aa03-667cc7345de5","Type":"ContainerStarted","Data":"94475a765872defb9164e91cdd045fdfdefd13e8f3bf40f3ff1152b0cd9115fd"} Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.536643 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c54f964b4-2pr6w" event={"ID":"115e68d8-a2a1-4c21-ae7f-2ec4e47855f9","Type":"ContainerDied","Data":"b185a452bc5dac613118701892db6278ac29b6cb0c7b724ddabd0b72bbe465ba"} Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.536683 4996 scope.go:117] "RemoveContainer" containerID="847717fb96d7153b1ea89ea7c4cb33a2cdc4173d05d5e4bb193f470609c8d7af" Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.536836 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c54f964b4-2pr6w" Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.562825 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.634699019 podStartE2EDuration="10.562772336s" podCreationTimestamp="2026-02-28 09:22:09 +0000 UTC" firstStartedPulling="2026-02-28 09:22:10.596106849 +0000 UTC m=+1294.286909660" lastFinishedPulling="2026-02-28 09:22:18.524180146 +0000 UTC m=+1302.214982977" observedRunningTime="2026-02-28 09:22:19.55014397 +0000 UTC m=+1303.240946811" watchObservedRunningTime="2026-02-28 09:22:19.562772336 +0000 UTC m=+1303.253575157" Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.589239 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c54f964b4-2pr6w"] Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.591941 4996 scope.go:117] "RemoveContainer" containerID="51098c0364485dd40a4a50cfc6729823f2c3411e9f760bff229ecf1ce961a278" Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.603034 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c54f964b4-2pr6w"] Feb 28 09:22:19 crc kubenswrapper[4996]: I0228 09:22:19.608438 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-gwpcz" podStartSLOduration=2.508392651 podStartE2EDuration="12.608420931s" podCreationTimestamp="2026-02-28 09:22:07 +0000 UTC" firstStartedPulling="2026-02-28 09:22:08.442615203 +0000 UTC m=+1292.133418014" lastFinishedPulling="2026-02-28 09:22:18.542643483 +0000 UTC m=+1302.233446294" observedRunningTime="2026-02-28 09:22:19.596993674 +0000 UTC m=+1303.287796495" watchObservedRunningTime="2026-02-28 09:22:19.608420931 +0000 UTC m=+1303.299223742" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.210105 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.286352 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-sg-core-conf-yaml\") pod \"d295f8c1-00c6-4896-bd35-9b0ed488a002\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.286422 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8nmd\" (UniqueName: \"kubernetes.io/projected/d295f8c1-00c6-4896-bd35-9b0ed488a002-kube-api-access-b8nmd\") pod \"d295f8c1-00c6-4896-bd35-9b0ed488a002\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.286470 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-scripts\") pod \"d295f8c1-00c6-4896-bd35-9b0ed488a002\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.286577 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-run-httpd\") pod \"d295f8c1-00c6-4896-bd35-9b0ed488a002\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.286721 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-combined-ca-bundle\") pod \"d295f8c1-00c6-4896-bd35-9b0ed488a002\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.286775 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-config-data\") pod \"d295f8c1-00c6-4896-bd35-9b0ed488a002\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.286836 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-log-httpd\") pod \"d295f8c1-00c6-4896-bd35-9b0ed488a002\" (UID: \"d295f8c1-00c6-4896-bd35-9b0ed488a002\") " Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.287339 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d295f8c1-00c6-4896-bd35-9b0ed488a002" (UID: "d295f8c1-00c6-4896-bd35-9b0ed488a002"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.287490 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.287804 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d295f8c1-00c6-4896-bd35-9b0ed488a002" (UID: "d295f8c1-00c6-4896-bd35-9b0ed488a002"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.291889 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-scripts" (OuterVolumeSpecName: "scripts") pod "d295f8c1-00c6-4896-bd35-9b0ed488a002" (UID: "d295f8c1-00c6-4896-bd35-9b0ed488a002"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.291993 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d295f8c1-00c6-4896-bd35-9b0ed488a002-kube-api-access-b8nmd" (OuterVolumeSpecName: "kube-api-access-b8nmd") pod "d295f8c1-00c6-4896-bd35-9b0ed488a002" (UID: "d295f8c1-00c6-4896-bd35-9b0ed488a002"). InnerVolumeSpecName "kube-api-access-b8nmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.337929 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d295f8c1-00c6-4896-bd35-9b0ed488a002" (UID: "d295f8c1-00c6-4896-bd35-9b0ed488a002"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.385168 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d295f8c1-00c6-4896-bd35-9b0ed488a002" (UID: "d295f8c1-00c6-4896-bd35-9b0ed488a002"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.389366 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.389410 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8nmd\" (UniqueName: \"kubernetes.io/projected/d295f8c1-00c6-4896-bd35-9b0ed488a002-kube-api-access-b8nmd\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.389425 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.389437 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.389448 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d295f8c1-00c6-4896-bd35-9b0ed488a002-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.393583 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-config-data" (OuterVolumeSpecName: "config-data") pod "d295f8c1-00c6-4896-bd35-9b0ed488a002" (UID: "d295f8c1-00c6-4896-bd35-9b0ed488a002"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.498216 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d295f8c1-00c6-4896-bd35-9b0ed488a002-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555034 4996 generic.go:334] "Generic (PLEG): container finished" podID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerID="3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd" exitCode=0 Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555270 4996 generic.go:334] "Generic (PLEG): container finished" podID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerID="e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418" exitCode=2 Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555394 4996 generic.go:334] "Generic (PLEG): container finished" podID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerID="96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd" exitCode=0 Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555490 4996 generic.go:334] "Generic (PLEG): container finished" podID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerID="731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484" exitCode=0 Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555090 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerDied","Data":"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd"} Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555113 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555692 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerDied","Data":"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418"} Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555816 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerDied","Data":"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd"} Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555856 4996 scope.go:117] "RemoveContainer" containerID="3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555867 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerDied","Data":"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484"} Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.555982 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d295f8c1-00c6-4896-bd35-9b0ed488a002","Type":"ContainerDied","Data":"dd844cc5e4bbc02a8302f58ce2cddd767150a162310bc105570ea73765404cbb"} Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.576552 4996 scope.go:117] "RemoveContainer" containerID="e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.594759 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.603890 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.610145 4996 scope.go:117] "RemoveContainer" 
containerID="96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.635800 4996 scope.go:117] "RemoveContainer" containerID="731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646082 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.646578 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerName="neutron-httpd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646599 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerName="neutron-httpd" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.646614 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="sg-core" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646623 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="sg-core" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.646641 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="proxy-httpd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646649 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="proxy-httpd" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.646667 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerName="placement-log" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646675 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerName="placement-log" Feb 28 09:22:20 crc kubenswrapper[4996]: 
E0228 09:22:20.646685 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon-log" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646692 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon-log" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.646708 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646715 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.646735 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerName="placement-api" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646743 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerName="placement-api" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.646754 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="ceilometer-central-agent" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646761 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="ceilometer-central-agent" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.646775 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerName="neutron-api" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646783 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerName="neutron-api" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.646798 4996 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="ceilometer-notification-agent" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646805 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="ceilometer-notification-agent" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.646993 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="proxy-httpd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.647030 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="ceilometer-notification-agent" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.647045 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerName="neutron-api" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.647061 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerName="placement-log" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.647077 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b6ecc3c-f37f-4ae1-ad98-b71af059ca5e" containerName="neutron-httpd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.647087 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="ceilometer-central-agent" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.647101 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon-log" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.647114 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" containerName="placement-api" Feb 28 
09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.647126 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dcd95e8-c193-47ef-bc21-acabccfcff53" containerName="horizon" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.647140 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" containerName="sg-core" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.648905 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.656966 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.678343 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.689371 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.698905 4996 scope.go:117] "RemoveContainer" containerID="3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.699425 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd\": container with ID starting with 3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd not found: ID does not exist" containerID="3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.699464 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd"} err="failed to get container status 
\"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd\": rpc error: code = NotFound desc = could not find container \"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd\": container with ID starting with 3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.699486 4996 scope.go:117] "RemoveContainer" containerID="e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.699875 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418\": container with ID starting with e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418 not found: ID does not exist" containerID="e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.699894 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418"} err="failed to get container status \"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418\": rpc error: code = NotFound desc = could not find container \"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418\": container with ID starting with e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418 not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.699911 4996 scope.go:117] "RemoveContainer" containerID="96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.705287 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd\": container with ID starting with 96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd not found: ID does not exist" containerID="96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.705359 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd"} err="failed to get container status \"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd\": rpc error: code = NotFound desc = could not find container \"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd\": container with ID starting with 96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.705395 4996 scope.go:117] "RemoveContainer" containerID="731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484" Feb 28 09:22:20 crc kubenswrapper[4996]: E0228 09:22:20.705827 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484\": container with ID starting with 731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484 not found: ID does not exist" containerID="731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.705863 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484"} err="failed to get container status \"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484\": rpc error: code = NotFound desc = could not find container \"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484\": container with ID 
starting with 731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484 not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.705885 4996 scope.go:117] "RemoveContainer" containerID="3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706109 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd"} err="failed to get container status \"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd\": rpc error: code = NotFound desc = could not find container \"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd\": container with ID starting with 3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706125 4996 scope.go:117] "RemoveContainer" containerID="e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706283 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418"} err="failed to get container status \"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418\": rpc error: code = NotFound desc = could not find container \"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418\": container with ID starting with e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418 not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706297 4996 scope.go:117] "RemoveContainer" containerID="96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706446 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd"} err="failed to get container status \"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd\": rpc error: code = NotFound desc = could not find container \"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd\": container with ID starting with 96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706459 4996 scope.go:117] "RemoveContainer" containerID="731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706606 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484"} err="failed to get container status \"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484\": rpc error: code = NotFound desc = could not find container \"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484\": container with ID starting with 731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484 not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706625 4996 scope.go:117] "RemoveContainer" containerID="3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706776 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd"} err="failed to get container status \"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd\": rpc error: code = NotFound desc = could not find container \"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd\": container with ID starting with 3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd not found: ID does not 
exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706789 4996 scope.go:117] "RemoveContainer" containerID="e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706935 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418"} err="failed to get container status \"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418\": rpc error: code = NotFound desc = could not find container \"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418\": container with ID starting with e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418 not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.706959 4996 scope.go:117] "RemoveContainer" containerID="96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707130 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd"} err="failed to get container status \"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd\": rpc error: code = NotFound desc = could not find container \"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd\": container with ID starting with 96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707143 4996 scope.go:117] "RemoveContainer" containerID="731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707325 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484"} err="failed to get container status 
\"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484\": rpc error: code = NotFound desc = could not find container \"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484\": container with ID starting with 731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484 not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707340 4996 scope.go:117] "RemoveContainer" containerID="3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707488 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd"} err="failed to get container status \"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd\": rpc error: code = NotFound desc = could not find container \"3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd\": container with ID starting with 3bd485e97a64767a310a2a078cbfea0b95649681d82a4c6088f475d005116cbd not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707500 4996 scope.go:117] "RemoveContainer" containerID="e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707644 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418"} err="failed to get container status \"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418\": rpc error: code = NotFound desc = could not find container \"e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418\": container with ID starting with e76cb212bc5602ac9dc61838b5c152ea200579939b69826fd3157e917733f418 not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707657 4996 scope.go:117] "RemoveContainer" 
containerID="96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707800 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd"} err="failed to get container status \"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd\": rpc error: code = NotFound desc = could not find container \"96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd\": container with ID starting with 96e09f58dbd6f37e269f1788233b54a0fd52353fbb08c57ad24d9351dd0042dd not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.707814 4996 scope.go:117] "RemoveContainer" containerID="731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.708052 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484"} err="failed to get container status \"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484\": rpc error: code = NotFound desc = could not find container \"731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484\": container with ID starting with 731b3225350042735ba857426a275314ecbc605e8eb06f73c529a66cf1fec484 not found: ID does not exist" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.708141 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-scripts\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.708205 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.708268 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-config-data\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.708299 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-run-httpd\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.708425 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.708579 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nr8x8\" (UniqueName: \"kubernetes.io/projected/14cfcdb9-54f5-466b-ae1e-049f26c69468-kube-api-access-nr8x8\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.708610 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-log-httpd\") pod \"ceilometer-0\" (UID: 
\"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.810306 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nr8x8\" (UniqueName: \"kubernetes.io/projected/14cfcdb9-54f5-466b-ae1e-049f26c69468-kube-api-access-nr8x8\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.810356 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-log-httpd\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.810396 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-scripts\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.810417 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.810443 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-config-data\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.810460 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-run-httpd\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.810601 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.810961 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-log-httpd\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.811081 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-run-httpd\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.816539 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.816965 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-scripts\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.820073 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.828441 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nr8x8\" (UniqueName: \"kubernetes.io/projected/14cfcdb9-54f5-466b-ae1e-049f26c69468-kube-api-access-nr8x8\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.828580 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-config-data\") pod \"ceilometer-0\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " pod="openstack/ceilometer-0" Feb 28 09:22:20 crc kubenswrapper[4996]: I0228 09:22:20.998551 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:21 crc kubenswrapper[4996]: I0228 09:22:21.047834 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115e68d8-a2a1-4c21-ae7f-2ec4e47855f9" path="/var/lib/kubelet/pods/115e68d8-a2a1-4c21-ae7f-2ec4e47855f9/volumes" Feb 28 09:22:21 crc kubenswrapper[4996]: I0228 09:22:21.049066 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d295f8c1-00c6-4896-bd35-9b0ed488a002" path="/var/lib/kubelet/pods/d295f8c1-00c6-4896-bd35-9b0ed488a002/volumes" Feb 28 09:22:21 crc kubenswrapper[4996]: I0228 09:22:21.458599 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:21 crc kubenswrapper[4996]: I0228 09:22:21.566693 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerStarted","Data":"c1cb375a29daf84f841b93d43f4681b6895f18fd9573933c60be98c67be6a253"} Feb 28 09:22:22 crc kubenswrapper[4996]: I0228 09:22:22.577397 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerStarted","Data":"9b923ae7265adc672deabe2fe79d1509b4350b7a05b7cb00f64f7941fa9d76bc"} Feb 28 09:22:23 crc kubenswrapper[4996]: I0228 09:22:23.589799 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerStarted","Data":"4102d8ccb9e4d1c35ad877d69d9177f3e97075ccf9d3d6ce8cf828ae1ddf9b58"} Feb 28 09:22:23 crc kubenswrapper[4996]: I0228 09:22:23.590147 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerStarted","Data":"a89fb8d93fd0ec37bb7389e9a8d2f570deca553e0352f8c6d56e96de306c559c"} Feb 28 09:22:24 crc kubenswrapper[4996]: I0228 09:22:24.970936 4996 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:26 crc kubenswrapper[4996]: I0228 09:22:26.656604 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerStarted","Data":"87eecc751bd6a85fd53cae97f58a8208ed7512844a56460950967937c1f9d3a3"} Feb 28 09:22:26 crc kubenswrapper[4996]: I0228 09:22:26.657292 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="ceilometer-central-agent" containerID="cri-o://9b923ae7265adc672deabe2fe79d1509b4350b7a05b7cb00f64f7941fa9d76bc" gracePeriod=30 Feb 28 09:22:26 crc kubenswrapper[4996]: I0228 09:22:26.657560 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 09:22:26 crc kubenswrapper[4996]: I0228 09:22:26.657811 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="proxy-httpd" containerID="cri-o://87eecc751bd6a85fd53cae97f58a8208ed7512844a56460950967937c1f9d3a3" gracePeriod=30 Feb 28 09:22:26 crc kubenswrapper[4996]: I0228 09:22:26.657855 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="sg-core" containerID="cri-o://4102d8ccb9e4d1c35ad877d69d9177f3e97075ccf9d3d6ce8cf828ae1ddf9b58" gracePeriod=30 Feb 28 09:22:26 crc kubenswrapper[4996]: I0228 09:22:26.657886 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="ceilometer-notification-agent" containerID="cri-o://a89fb8d93fd0ec37bb7389e9a8d2f570deca553e0352f8c6d56e96de306c559c" gracePeriod=30 Feb 28 09:22:26 crc kubenswrapper[4996]: I0228 09:22:26.682605 4996 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7293444449999997 podStartE2EDuration="6.682590282s" podCreationTimestamp="2026-02-28 09:22:20 +0000 UTC" firstStartedPulling="2026-02-28 09:22:21.473995186 +0000 UTC m=+1305.164798007" lastFinishedPulling="2026-02-28 09:22:25.427241033 +0000 UTC m=+1309.118043844" observedRunningTime="2026-02-28 09:22:26.680973593 +0000 UTC m=+1310.371776444" watchObservedRunningTime="2026-02-28 09:22:26.682590282 +0000 UTC m=+1310.373393093" Feb 28 09:22:26 crc kubenswrapper[4996]: E0228 09:22:26.937157 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14cfcdb9_54f5_466b_ae1e_049f26c69468.slice/crio-conmon-87eecc751bd6a85fd53cae97f58a8208ed7512844a56460950967937c1f9d3a3.scope\": RecentStats: unable to find data in memory cache]" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.674429 4996 generic.go:334] "Generic (PLEG): container finished" podID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerID="87eecc751bd6a85fd53cae97f58a8208ed7512844a56460950967937c1f9d3a3" exitCode=0 Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.675223 4996 generic.go:334] "Generic (PLEG): container finished" podID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerID="4102d8ccb9e4d1c35ad877d69d9177f3e97075ccf9d3d6ce8cf828ae1ddf9b58" exitCode=2 Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.675237 4996 generic.go:334] "Generic (PLEG): container finished" podID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerID="a89fb8d93fd0ec37bb7389e9a8d2f570deca553e0352f8c6d56e96de306c559c" exitCode=0 Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.675247 4996 generic.go:334] "Generic (PLEG): container finished" podID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerID="9b923ae7265adc672deabe2fe79d1509b4350b7a05b7cb00f64f7941fa9d76bc" exitCode=0 Feb 
28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.674510 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerDied","Data":"87eecc751bd6a85fd53cae97f58a8208ed7512844a56460950967937c1f9d3a3"} Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.675280 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerDied","Data":"4102d8ccb9e4d1c35ad877d69d9177f3e97075ccf9d3d6ce8cf828ae1ddf9b58"} Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.675291 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerDied","Data":"a89fb8d93fd0ec37bb7389e9a8d2f570deca553e0352f8c6d56e96de306c559c"} Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.675301 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerDied","Data":"9b923ae7265adc672deabe2fe79d1509b4350b7a05b7cb00f64f7941fa9d76bc"} Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.734508 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.835149 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-scripts\") pod \"14cfcdb9-54f5-466b-ae1e-049f26c69468\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.835282 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-combined-ca-bundle\") pod \"14cfcdb9-54f5-466b-ae1e-049f26c69468\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.835338 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-log-httpd\") pod \"14cfcdb9-54f5-466b-ae1e-049f26c69468\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.835435 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-run-httpd\") pod \"14cfcdb9-54f5-466b-ae1e-049f26c69468\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.835456 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-sg-core-conf-yaml\") pod \"14cfcdb9-54f5-466b-ae1e-049f26c69468\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.835495 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nr8x8\" (UniqueName: 
\"kubernetes.io/projected/14cfcdb9-54f5-466b-ae1e-049f26c69468-kube-api-access-nr8x8\") pod \"14cfcdb9-54f5-466b-ae1e-049f26c69468\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.835519 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-config-data\") pod \"14cfcdb9-54f5-466b-ae1e-049f26c69468\" (UID: \"14cfcdb9-54f5-466b-ae1e-049f26c69468\") " Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.835958 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "14cfcdb9-54f5-466b-ae1e-049f26c69468" (UID: "14cfcdb9-54f5-466b-ae1e-049f26c69468"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.836116 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "14cfcdb9-54f5-466b-ae1e-049f26c69468" (UID: "14cfcdb9-54f5-466b-ae1e-049f26c69468"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.841811 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-scripts" (OuterVolumeSpecName: "scripts") pod "14cfcdb9-54f5-466b-ae1e-049f26c69468" (UID: "14cfcdb9-54f5-466b-ae1e-049f26c69468"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.850260 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14cfcdb9-54f5-466b-ae1e-049f26c69468-kube-api-access-nr8x8" (OuterVolumeSpecName: "kube-api-access-nr8x8") pod "14cfcdb9-54f5-466b-ae1e-049f26c69468" (UID: "14cfcdb9-54f5-466b-ae1e-049f26c69468"). InnerVolumeSpecName "kube-api-access-nr8x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.869670 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "14cfcdb9-54f5-466b-ae1e-049f26c69468" (UID: "14cfcdb9-54f5-466b-ae1e-049f26c69468"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.938187 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.938431 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/14cfcdb9-54f5-466b-ae1e-049f26c69468-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.938533 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.938629 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nr8x8\" (UniqueName: \"kubernetes.io/projected/14cfcdb9-54f5-466b-ae1e-049f26c69468-kube-api-access-nr8x8\") on 
node \"crc\" DevicePath \"\"" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.938724 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.942723 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14cfcdb9-54f5-466b-ae1e-049f26c69468" (UID: "14cfcdb9-54f5-466b-ae1e-049f26c69468"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:27 crc kubenswrapper[4996]: I0228 09:22:27.942804 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-config-data" (OuterVolumeSpecName: "config-data") pod "14cfcdb9-54f5-466b-ae1e-049f26c69468" (UID: "14cfcdb9-54f5-466b-ae1e-049f26c69468"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.041438 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.041733 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14cfcdb9-54f5-466b-ae1e-049f26c69468-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.691156 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"14cfcdb9-54f5-466b-ae1e-049f26c69468","Type":"ContainerDied","Data":"c1cb375a29daf84f841b93d43f4681b6895f18fd9573933c60be98c67be6a253"} Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.691477 4996 scope.go:117] "RemoveContainer" containerID="87eecc751bd6a85fd53cae97f58a8208ed7512844a56460950967937c1f9d3a3" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.691204 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.714376 4996 scope.go:117] "RemoveContainer" containerID="4102d8ccb9e4d1c35ad877d69d9177f3e97075ccf9d3d6ce8cf828ae1ddf9b58" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.733823 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.738094 4996 scope.go:117] "RemoveContainer" containerID="a89fb8d93fd0ec37bb7389e9a8d2f570deca553e0352f8c6d56e96de306c559c" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.749149 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.758696 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:28 crc kubenswrapper[4996]: E0228 09:22:28.759158 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="ceilometer-central-agent" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.759173 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="ceilometer-central-agent" Feb 28 09:22:28 crc kubenswrapper[4996]: E0228 09:22:28.759190 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="sg-core" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.759199 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="sg-core" Feb 28 09:22:28 crc kubenswrapper[4996]: E0228 09:22:28.759218 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="proxy-httpd" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.759227 4996 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="proxy-httpd" Feb 28 09:22:28 crc kubenswrapper[4996]: E0228 09:22:28.759250 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="ceilometer-notification-agent" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.759258 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="ceilometer-notification-agent" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.759285 4996 scope.go:117] "RemoveContainer" containerID="9b923ae7265adc672deabe2fe79d1509b4350b7a05b7cb00f64f7941fa9d76bc" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.759462 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="ceilometer-notification-agent" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.759479 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="ceilometer-central-agent" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.759503 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="sg-core" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.759520 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" containerName="proxy-httpd" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.761256 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.764514 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.764722 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.777707 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.856582 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.856629 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88kdt\" (UniqueName: \"kubernetes.io/projected/3f83527f-6adb-44c7-8d28-79411d2f8aa2-kube-api-access-88kdt\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.856651 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.856697 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-scripts\") pod \"ceilometer-0\" (UID: 
\"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.856717 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-run-httpd\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.856737 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-log-httpd\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.856764 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-config-data\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.958539 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.958594 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88kdt\" (UniqueName: \"kubernetes.io/projected/3f83527f-6adb-44c7-8d28-79411d2f8aa2-kube-api-access-88kdt\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.958614 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.958655 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-scripts\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.958678 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-run-httpd\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.958699 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-log-httpd\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.958725 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-config-data\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.959949 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-log-httpd\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 
09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.960038 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-run-httpd\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.963573 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.963698 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.963853 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-scripts\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.967875 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-config-data\") pod \"ceilometer-0\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:28 crc kubenswrapper[4996]: I0228 09:22:28.976772 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88kdt\" (UniqueName: \"kubernetes.io/projected/3f83527f-6adb-44c7-8d28-79411d2f8aa2-kube-api-access-88kdt\") pod \"ceilometer-0\" (UID: 
\"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " pod="openstack/ceilometer-0" Feb 28 09:22:29 crc kubenswrapper[4996]: I0228 09:22:29.042822 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14cfcdb9-54f5-466b-ae1e-049f26c69468" path="/var/lib/kubelet/pods/14cfcdb9-54f5-466b-ae1e-049f26c69468/volumes" Feb 28 09:22:29 crc kubenswrapper[4996]: I0228 09:22:29.090163 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:22:29 crc kubenswrapper[4996]: I0228 09:22:29.537035 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:22:29 crc kubenswrapper[4996]: I0228 09:22:29.702320 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerStarted","Data":"3c16bbff50c16d10d6a525c4a8a2b29c6296848fb9263c6ae1fa85e389b9bcf0"} Feb 28 09:22:30 crc kubenswrapper[4996]: I0228 09:22:30.717237 4996 generic.go:334] "Generic (PLEG): container finished" podID="d97aca1c-945f-4e22-aa03-667cc7345de5" containerID="94475a765872defb9164e91cdd045fdfdefd13e8f3bf40f3ff1152b0cd9115fd" exitCode=0 Feb 28 09:22:30 crc kubenswrapper[4996]: I0228 09:22:30.717853 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gwpcz" event={"ID":"d97aca1c-945f-4e22-aa03-667cc7345de5","Type":"ContainerDied","Data":"94475a765872defb9164e91cdd045fdfdefd13e8f3bf40f3ff1152b0cd9115fd"} Feb 28 09:22:30 crc kubenswrapper[4996]: I0228 09:22:30.722506 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerStarted","Data":"25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9"} Feb 28 09:22:31 crc kubenswrapper[4996]: I0228 09:22:31.731838 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerStarted","Data":"3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7"} Feb 28 09:22:31 crc kubenswrapper[4996]: I0228 09:22:31.732335 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerStarted","Data":"2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4"} Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.058379 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.119746 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwbrc\" (UniqueName: \"kubernetes.io/projected/d97aca1c-945f-4e22-aa03-667cc7345de5-kube-api-access-bwbrc\") pod \"d97aca1c-945f-4e22-aa03-667cc7345de5\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.119914 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-scripts\") pod \"d97aca1c-945f-4e22-aa03-667cc7345de5\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.120037 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-config-data\") pod \"d97aca1c-945f-4e22-aa03-667cc7345de5\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.120078 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-combined-ca-bundle\") pod 
\"d97aca1c-945f-4e22-aa03-667cc7345de5\" (UID: \"d97aca1c-945f-4e22-aa03-667cc7345de5\") " Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.124636 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97aca1c-945f-4e22-aa03-667cc7345de5-kube-api-access-bwbrc" (OuterVolumeSpecName: "kube-api-access-bwbrc") pod "d97aca1c-945f-4e22-aa03-667cc7345de5" (UID: "d97aca1c-945f-4e22-aa03-667cc7345de5"). InnerVolumeSpecName "kube-api-access-bwbrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.125839 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-scripts" (OuterVolumeSpecName: "scripts") pod "d97aca1c-945f-4e22-aa03-667cc7345de5" (UID: "d97aca1c-945f-4e22-aa03-667cc7345de5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.143822 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d97aca1c-945f-4e22-aa03-667cc7345de5" (UID: "d97aca1c-945f-4e22-aa03-667cc7345de5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.161682 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-config-data" (OuterVolumeSpecName: "config-data") pod "d97aca1c-945f-4e22-aa03-667cc7345de5" (UID: "d97aca1c-945f-4e22-aa03-667cc7345de5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.222104 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.222337 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.222420 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d97aca1c-945f-4e22-aa03-667cc7345de5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.222536 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwbrc\" (UniqueName: \"kubernetes.io/projected/d97aca1c-945f-4e22-aa03-667cc7345de5-kube-api-access-bwbrc\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.745228 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-gwpcz" event={"ID":"d97aca1c-945f-4e22-aa03-667cc7345de5","Type":"ContainerDied","Data":"da090829f761098c12a4b58b44c1e20da2770befdb4a4aeed39cd4a3062893cc"} Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.745666 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da090829f761098c12a4b58b44c1e20da2770befdb4a4aeed39cd4a3062893cc" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.745326 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-gwpcz" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.952571 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 09:22:32 crc kubenswrapper[4996]: E0228 09:22:32.953175 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97aca1c-945f-4e22-aa03-667cc7345de5" containerName="nova-cell0-conductor-db-sync" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.953190 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97aca1c-945f-4e22-aa03-667cc7345de5" containerName="nova-cell0-conductor-db-sync" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.953407 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97aca1c-945f-4e22-aa03-667cc7345de5" containerName="nova-cell0-conductor-db-sync" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.954150 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.957320 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9g9kx" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.957376 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 28 09:22:32 crc kubenswrapper[4996]: I0228 09:22:32.969991 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.037142 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65ff4ec-036e-4680-8a41-9941e185fc14-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d65ff4ec-036e-4680-8a41-9941e185fc14\") " pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 
09:22:33.037233 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd6jg\" (UniqueName: \"kubernetes.io/projected/d65ff4ec-036e-4680-8a41-9941e185fc14-kube-api-access-hd6jg\") pod \"nova-cell0-conductor-0\" (UID: \"d65ff4ec-036e-4680-8a41-9941e185fc14\") " pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.037325 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65ff4ec-036e-4680-8a41-9941e185fc14-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d65ff4ec-036e-4680-8a41-9941e185fc14\") " pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.139212 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65ff4ec-036e-4680-8a41-9941e185fc14-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d65ff4ec-036e-4680-8a41-9941e185fc14\") " pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.139295 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd6jg\" (UniqueName: \"kubernetes.io/projected/d65ff4ec-036e-4680-8a41-9941e185fc14-kube-api-access-hd6jg\") pod \"nova-cell0-conductor-0\" (UID: \"d65ff4ec-036e-4680-8a41-9941e185fc14\") " pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.139318 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65ff4ec-036e-4680-8a41-9941e185fc14-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d65ff4ec-036e-4680-8a41-9941e185fc14\") " pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.145230 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d65ff4ec-036e-4680-8a41-9941e185fc14-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d65ff4ec-036e-4680-8a41-9941e185fc14\") " pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.147411 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d65ff4ec-036e-4680-8a41-9941e185fc14-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d65ff4ec-036e-4680-8a41-9941e185fc14\") " pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.167286 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd6jg\" (UniqueName: \"kubernetes.io/projected/d65ff4ec-036e-4680-8a41-9941e185fc14-kube-api-access-hd6jg\") pod \"nova-cell0-conductor-0\" (UID: \"d65ff4ec-036e-4680-8a41-9941e185fc14\") " pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.283328 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.736795 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.772494 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerStarted","Data":"6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680"} Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.774122 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 09:22:33 crc kubenswrapper[4996]: I0228 09:22:33.804228 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.379820089 podStartE2EDuration="5.804209811s" podCreationTimestamp="2026-02-28 09:22:28 +0000 UTC" firstStartedPulling="2026-02-28 09:22:29.542032573 +0000 UTC m=+1313.232835384" lastFinishedPulling="2026-02-28 09:22:32.966422275 +0000 UTC m=+1316.657225106" observedRunningTime="2026-02-28 09:22:33.790466798 +0000 UTC m=+1317.481269629" watchObservedRunningTime="2026-02-28 09:22:33.804209811 +0000 UTC m=+1317.495012622" Feb 28 09:22:34 crc kubenswrapper[4996]: I0228 09:22:34.784818 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d65ff4ec-036e-4680-8a41-9941e185fc14","Type":"ContainerStarted","Data":"9ee22dc3731cf974ec16076c825ca3b649042673626869af85b29ab8ec653c2b"} Feb 28 09:22:34 crc kubenswrapper[4996]: I0228 09:22:34.785249 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:34 crc kubenswrapper[4996]: I0228 09:22:34.785276 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" 
event={"ID":"d65ff4ec-036e-4680-8a41-9941e185fc14","Type":"ContainerStarted","Data":"3ffdde638523a29c8b646aa9da0eb15a46e2fa1e9b12a2811e29d252d9a3b169"} Feb 28 09:22:34 crc kubenswrapper[4996]: I0228 09:22:34.810578 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.8105458800000003 podStartE2EDuration="2.81054588s" podCreationTimestamp="2026-02-28 09:22:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:22:34.797652998 +0000 UTC m=+1318.488455839" watchObservedRunningTime="2026-02-28 09:22:34.81054588 +0000 UTC m=+1318.501348691" Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.331335 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.850184 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-kchvm"] Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.851489 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.853671 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.854340 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.867520 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kchvm"] Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.950175 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-config-data\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.950236 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.950341 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-scripts\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:38 crc kubenswrapper[4996]: I0228 09:22:38.950570 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clb2n\" (UniqueName: 
\"kubernetes.io/projected/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-kube-api-access-clb2n\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.024514 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.025834 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.032511 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.053468 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clb2n\" (UniqueName: \"kubernetes.io/projected/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-kube-api-access-clb2n\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.053556 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-config-data\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.053582 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.053632 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-scripts\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.065081 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-config-data\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.066985 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.071792 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-scripts\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.071900 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.106562 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clb2n\" (UniqueName: \"kubernetes.io/projected/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-kube-api-access-clb2n\") pod \"nova-cell0-cell-mapping-kchvm\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") " pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: 
I0228 09:22:39.157028 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tqg\" (UniqueName: \"kubernetes.io/projected/5083d983-d38f-4f2d-88cd-f3246e0b8e82-kube-api-access-l5tqg\") pod \"nova-scheduler-0\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") " pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.157089 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") " pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.157123 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-config-data\") pod \"nova-scheduler-0\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") " pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.171593 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kchvm" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.185951 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.187401 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.190906 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.211699 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.258232 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tqg\" (UniqueName: \"kubernetes.io/projected/5083d983-d38f-4f2d-88cd-f3246e0b8e82-kube-api-access-l5tqg\") pod \"nova-scheduler-0\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") " pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.258297 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") " pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.258327 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.258346 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-config-data\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.258379 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-config-data\") pod \"nova-scheduler-0\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") " pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.258437 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxswb\" (UniqueName: \"kubernetes.io/projected/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-kube-api-access-sxswb\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.258460 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-logs\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.269760 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") " pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.292954 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-config-data\") pod \"nova-scheduler-0\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") " pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.306398 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tqg\" (UniqueName: \"kubernetes.io/projected/5083d983-d38f-4f2d-88cd-f3246e0b8e82-kube-api-access-l5tqg\") pod 
\"nova-scheduler-0\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") " pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.351352 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-pm8t6"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.353343 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.359620 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.359668 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-config-data\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.359737 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxswb\" (UniqueName: \"kubernetes.io/projected/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-kube-api-access-sxswb\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.359761 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-logs\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.360535 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-logs\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.365613 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-config-data\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.374912 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.382230 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.383841 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.389501 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.417991 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxswb\" (UniqueName: \"kubernetes.io/projected/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-kube-api-access-sxswb\") pod \"nova-metadata-0\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.423163 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.452647 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-pm8t6"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.462490 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9t6v\" (UniqueName: \"kubernetes.io/projected/b391b287-fc4a-437e-b157-db5e86661249-kube-api-access-l9t6v\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.462566 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.462627 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.462677 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btdrb\" (UniqueName: \"kubernetes.io/projected/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-kube-api-access-btdrb\") pod \"nova-cell1-novncproxy-0\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.462702 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-config\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.462735 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-dns-svc\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.462783 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.462807 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.465590 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.505083 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.518379 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.535801 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.539233 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.578936 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btdrb\" (UniqueName: \"kubernetes.io/projected/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-kube-api-access-btdrb\") pod \"nova-cell1-novncproxy-0\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.580337 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-config\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.580526 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-dns-svc\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.580933 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.581075 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.581720 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.581975 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-config\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.582236 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-config-data\") pod 
\"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.586968 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.582563 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9t6v\" (UniqueName: \"kubernetes.io/projected/b391b287-fc4a-437e-b157-db5e86661249-kube-api-access-l9t6v\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.588114 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2eaaf97-646d-40f7-b1db-5d2110170b65-logs\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.588440 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.588874 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r45kh\" (UniqueName: \"kubernetes.io/projected/f2eaaf97-646d-40f7-b1db-5d2110170b65-kube-api-access-r45kh\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 
crc kubenswrapper[4996]: I0228 09:22:39.589213 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.594165 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-dns-svc\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.596958 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.615132 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.638570 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.685908 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-btdrb\" (UniqueName: \"kubernetes.io/projected/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-kube-api-access-btdrb\") pod \"nova-cell1-novncproxy-0\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.699136 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.699217 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-config-data\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.699258 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2eaaf97-646d-40f7-b1db-5d2110170b65-logs\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.699348 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r45kh\" (UniqueName: \"kubernetes.io/projected/f2eaaf97-646d-40f7-b1db-5d2110170b65-kube-api-access-r45kh\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.703606 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2eaaf97-646d-40f7-b1db-5d2110170b65-logs\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc 
kubenswrapper[4996]: I0228 09:22:39.704837 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-config-data\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.704903 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.734565 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r45kh\" (UniqueName: \"kubernetes.io/projected/f2eaaf97-646d-40f7-b1db-5d2110170b65-kube-api-access-r45kh\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.735278 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " pod="openstack/nova-api-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.746890 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9t6v\" (UniqueName: \"kubernetes.io/projected/b391b287-fc4a-437e-b157-db5e86661249-kube-api-access-l9t6v\") pod \"dnsmasq-dns-566b5b7845-pm8t6\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.771791 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:22:39 crc kubenswrapper[4996]: I0228 09:22:39.921241 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.011254 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-kchvm"] Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.018231 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.203918 4996 scope.go:117] "RemoveContainer" containerID="73f952884d1c205f42889a765ea2798ed738a56360725fd5ca62f207067646e0" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.223048 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.252343 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.425402 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.451895 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pf8pr"] Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.453494 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.454909 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.457999 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.478048 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pf8pr"] Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.515949 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrd2t\" (UniqueName: \"kubernetes.io/projected/e55f51fe-d8cc-47d5-9b5d-29877c65069a-kube-api-access-wrd2t\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.516203 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-scripts\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.516240 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-config-data\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.516260 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.517213 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.618493 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrd2t\" (UniqueName: \"kubernetes.io/projected/e55f51fe-d8cc-47d5-9b5d-29877c65069a-kube-api-access-wrd2t\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.618696 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-scripts\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.618737 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-config-data\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.618758 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " 
pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.626850 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-scripts\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.626975 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.627678 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-config-data\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.640476 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrd2t\" (UniqueName: \"kubernetes.io/projected/e55f51fe-d8cc-47d5-9b5d-29877c65069a-kube-api-access-wrd2t\") pod \"nova-cell1-conductor-db-sync-pf8pr\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") " pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.652039 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-pm8t6"] Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.771920 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pf8pr" Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.865857 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5083d983-d38f-4f2d-88cd-f3246e0b8e82","Type":"ContainerStarted","Data":"8f99d6b36c4a633811a1c7d58e286842d2e9b3b71605da8139ee70e19b07fb1a"} Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.867583 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kchvm" event={"ID":"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d","Type":"ContainerStarted","Data":"9b768e5c3391e0ec2c1b8b1272e8e8ba9cf6b9c4c4eb17a944228747404439b6"} Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.867617 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kchvm" event={"ID":"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d","Type":"ContainerStarted","Data":"db5cfc440ae31607a61c704bc5f8c52f0141d207901c7e02c4b33373cd9da8c4"} Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.870981 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fe693ee9-0d2c-4b47-afd8-ef8317915fcc","Type":"ContainerStarted","Data":"23e10201ab43e47b4410adda6c4f17195ac31a601b0fa8822930550a10d86e8f"} Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.875049 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" event={"ID":"b391b287-fc4a-437e-b157-db5e86661249","Type":"ContainerStarted","Data":"d8c974342105b02788494b71bd04c62717fbc0d54f1da8fae56707f1240c5c02"} Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.880308 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2eaaf97-646d-40f7-b1db-5d2110170b65","Type":"ContainerStarted","Data":"8fdb0bb3ac47c5912fc4a4c9f831a23fe86fe9e955792d17186f7bb5f718ab93"} Feb 28 09:22:40 crc kubenswrapper[4996]: I0228 09:22:40.881404 4996 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e62fba78-9bdd-41bf-8cb9-7e21d68e963c","Type":"ContainerStarted","Data":"fd72b64ebef00416b6cfa04ecdc23d325596253af1c0a2866e9c1e22653fd2a1"} Feb 28 09:22:41 crc kubenswrapper[4996]: W0228 09:22:41.255549 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode55f51fe_d8cc_47d5_9b5d_29877c65069a.slice/crio-6a1188be87c9066f4de3a6c2eb0b218d5c0636ad8ed1845a5967da507c4a30c0 WatchSource:0}: Error finding container 6a1188be87c9066f4de3a6c2eb0b218d5c0636ad8ed1845a5967da507c4a30c0: Status 404 returned error can't find the container with id 6a1188be87c9066f4de3a6c2eb0b218d5c0636ad8ed1845a5967da507c4a30c0 Feb 28 09:22:41 crc kubenswrapper[4996]: I0228 09:22:41.254995 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-kchvm" podStartSLOduration=3.254977652 podStartE2EDuration="3.254977652s" podCreationTimestamp="2026-02-28 09:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:22:40.903258814 +0000 UTC m=+1324.594061625" watchObservedRunningTime="2026-02-28 09:22:41.254977652 +0000 UTC m=+1324.945780463" Feb 28 09:22:41 crc kubenswrapper[4996]: I0228 09:22:41.259613 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pf8pr"] Feb 28 09:22:41 crc kubenswrapper[4996]: I0228 09:22:41.896026 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pf8pr" event={"ID":"e55f51fe-d8cc-47d5-9b5d-29877c65069a","Type":"ContainerStarted","Data":"f5e5764fc2eee7b2ce75f276442d69f89e72ad38ff1431413ae64a2494e44e5a"} Feb 28 09:22:41 crc kubenswrapper[4996]: I0228 09:22:41.896762 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-pf8pr" event={"ID":"e55f51fe-d8cc-47d5-9b5d-29877c65069a","Type":"ContainerStarted","Data":"6a1188be87c9066f4de3a6c2eb0b218d5c0636ad8ed1845a5967da507c4a30c0"} Feb 28 09:22:41 crc kubenswrapper[4996]: I0228 09:22:41.899461 4996 generic.go:334] "Generic (PLEG): container finished" podID="b391b287-fc4a-437e-b157-db5e86661249" containerID="4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e" exitCode=0 Feb 28 09:22:41 crc kubenswrapper[4996]: I0228 09:22:41.902613 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" event={"ID":"b391b287-fc4a-437e-b157-db5e86661249","Type":"ContainerDied","Data":"4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e"} Feb 28 09:22:41 crc kubenswrapper[4996]: I0228 09:22:41.917341 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-pf8pr" podStartSLOduration=1.9173156900000001 podStartE2EDuration="1.91731569s" podCreationTimestamp="2026-02-28 09:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:22:41.913204801 +0000 UTC m=+1325.604007612" watchObservedRunningTime="2026-02-28 09:22:41.91731569 +0000 UTC m=+1325.608118511" Feb 28 09:22:42 crc kubenswrapper[4996]: I0228 09:22:42.909589 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" event={"ID":"b391b287-fc4a-437e-b157-db5e86661249","Type":"ContainerStarted","Data":"646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea"} Feb 28 09:22:42 crc kubenswrapper[4996]: I0228 09:22:42.933160 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" podStartSLOduration=3.933139428 podStartE2EDuration="3.933139428s" podCreationTimestamp="2026-02-28 09:22:39 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:22:42.931037707 +0000 UTC m=+1326.621840548" watchObservedRunningTime="2026-02-28 09:22:42.933139428 +0000 UTC m=+1326.623942239" Feb 28 09:22:43 crc kubenswrapper[4996]: I0228 09:22:43.156760 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:22:43 crc kubenswrapper[4996]: I0228 09:22:43.167788 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 09:22:43 crc kubenswrapper[4996]: I0228 09:22:43.919946 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.931019 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2eaaf97-646d-40f7-b1db-5d2110170b65","Type":"ContainerStarted","Data":"b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010"} Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.931601 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2eaaf97-646d-40f7-b1db-5d2110170b65","Type":"ContainerStarted","Data":"fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c"} Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.933027 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e62fba78-9bdd-41bf-8cb9-7e21d68e963c","Type":"ContainerStarted","Data":"b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942"} Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.933127 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e62fba78-9bdd-41bf-8cb9-7e21d68e963c" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942" 
gracePeriod=30 Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.935657 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fe693ee9-0d2c-4b47-afd8-ef8317915fcc","Type":"ContainerStarted","Data":"c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44"} Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.935708 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fe693ee9-0d2c-4b47-afd8-ef8317915fcc","Type":"ContainerStarted","Data":"26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28"} Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.935847 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerName="nova-metadata-log" containerID="cri-o://26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28" gracePeriod=30 Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.935982 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerName="nova-metadata-metadata" containerID="cri-o://c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44" gracePeriod=30 Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.939786 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5083d983-d38f-4f2d-88cd-f3246e0b8e82","Type":"ContainerStarted","Data":"b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2"} Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.963907 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.14132545 podStartE2EDuration="5.963882203s" podCreationTimestamp="2026-02-28 09:22:39 +0000 UTC" firstStartedPulling="2026-02-28 09:22:40.534793562 +0000 UTC m=+1324.225596373" 
lastFinishedPulling="2026-02-28 09:22:44.357350315 +0000 UTC m=+1328.048153126" observedRunningTime="2026-02-28 09:22:44.94890439 +0000 UTC m=+1328.639707201" watchObservedRunningTime="2026-02-28 09:22:44.963882203 +0000 UTC m=+1328.654685024"
Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.974108 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.08517861 podStartE2EDuration="5.974089719s" podCreationTimestamp="2026-02-28 09:22:39 +0000 UTC" firstStartedPulling="2026-02-28 09:22:40.4529333 +0000 UTC m=+1324.143736111" lastFinishedPulling="2026-02-28 09:22:44.341844409 +0000 UTC m=+1328.032647220" observedRunningTime="2026-02-28 09:22:44.964043656 +0000 UTC m=+1328.654846477" watchObservedRunningTime="2026-02-28 09:22:44.974089719 +0000 UTC m=+1328.664892530"
Feb 28 09:22:44 crc kubenswrapper[4996]: I0228 09:22:44.989547 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.877433699 podStartE2EDuration="5.989524903s" podCreationTimestamp="2026-02-28 09:22:39 +0000 UTC" firstStartedPulling="2026-02-28 09:22:40.228793993 +0000 UTC m=+1323.919596804" lastFinishedPulling="2026-02-28 09:22:44.340885187 +0000 UTC m=+1328.031688008" observedRunningTime="2026-02-28 09:22:44.983181069 +0000 UTC m=+1328.673983870" watchObservedRunningTime="2026-02-28 09:22:44.989524903 +0000 UTC m=+1328.680327714"
Feb 28 09:22:45 crc kubenswrapper[4996]: I0228 09:22:45.005736 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.934607814 podStartE2EDuration="7.005720136s" podCreationTimestamp="2026-02-28 09:22:38 +0000 UTC" firstStartedPulling="2026-02-28 09:22:40.268725909 +0000 UTC m=+1323.959528720" lastFinishedPulling="2026-02-28 09:22:44.339838231 +0000 UTC m=+1328.030641042" observedRunningTime="2026-02-28 09:22:44.999362121 +0000 UTC m=+1328.690164952" watchObservedRunningTime="2026-02-28 09:22:45.005720136 +0000 UTC m=+1328.696522947"
Feb 28 09:22:45 crc kubenswrapper[4996]: I0228 09:22:45.970167 4996 generic.go:334] "Generic (PLEG): container finished" podID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerID="26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28" exitCode=143
Feb 28 09:22:45 crc kubenswrapper[4996]: I0228 09:22:45.971083 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fe693ee9-0d2c-4b47-afd8-ef8317915fcc","Type":"ContainerDied","Data":"26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28"}
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.011950 4996 generic.go:334] "Generic (PLEG): container finished" podID="7dc51c80-ff0d-4bec-80c7-bd45d5e4970d" containerID="9b768e5c3391e0ec2c1b8b1272e8e8ba9cf6b9c4c4eb17a944228747404439b6" exitCode=0
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.012501 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kchvm" event={"ID":"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d","Type":"ContainerDied","Data":"9b768e5c3391e0ec2c1b8b1272e8e8ba9cf6b9c4c4eb17a944228747404439b6"}
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.466787 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.466892 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.493293 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.540595 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.540645 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.773418 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.922368 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 28 09:22:49 crc kubenswrapper[4996]: I0228 09:22:49.922424 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.019255 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6"
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.095727 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-2kbxj"]
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.095957 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" podUID="36aae0d9-72c5-4af8-9455-950962baeb28" containerName="dnsmasq-dns" containerID="cri-o://158595799c0ca17e54eb33f536b1bf532e3b707aa9ed6395879ad074acf10061" gracePeriod=10
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.103386 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.569476 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kchvm"
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.629518 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clb2n\" (UniqueName: \"kubernetes.io/projected/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-kube-api-access-clb2n\") pod \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") "
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.629630 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-scripts\") pod \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") "
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.629696 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-config-data\") pod \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") "
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.629825 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-combined-ca-bundle\") pod \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\" (UID: \"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d\") "
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.637663 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-scripts" (OuterVolumeSpecName: "scripts") pod "7dc51c80-ff0d-4bec-80c7-bd45d5e4970d" (UID: "7dc51c80-ff0d-4bec-80c7-bd45d5e4970d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.637745 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-kube-api-access-clb2n" (OuterVolumeSpecName: "kube-api-access-clb2n") pod "7dc51c80-ff0d-4bec-80c7-bd45d5e4970d" (UID: "7dc51c80-ff0d-4bec-80c7-bd45d5e4970d"). InnerVolumeSpecName "kube-api-access-clb2n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.677328 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dc51c80-ff0d-4bec-80c7-bd45d5e4970d" (UID: "7dc51c80-ff0d-4bec-80c7-bd45d5e4970d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.699206 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-config-data" (OuterVolumeSpecName: "config-data") pod "7dc51c80-ff0d-4bec-80c7-bd45d5e4970d" (UID: "7dc51c80-ff0d-4bec-80c7-bd45d5e4970d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.731284 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clb2n\" (UniqueName: \"kubernetes.io/projected/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-kube-api-access-clb2n\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.731562 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.731631 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-config-data\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:50 crc kubenswrapper[4996]: I0228 09:22:50.731696 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.004421 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.004604 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.064639 4996 generic.go:334] "Generic (PLEG): container finished" podID="36aae0d9-72c5-4af8-9455-950962baeb28" containerID="158595799c0ca17e54eb33f536b1bf532e3b707aa9ed6395879ad074acf10061" exitCode=0
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.065133 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" event={"ID":"36aae0d9-72c5-4af8-9455-950962baeb28","Type":"ContainerDied","Data":"158595799c0ca17e54eb33f536b1bf532e3b707aa9ed6395879ad074acf10061"}
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.085188 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-kchvm"
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.085745 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-kchvm" event={"ID":"7dc51c80-ff0d-4bec-80c7-bd45d5e4970d","Type":"ContainerDied","Data":"db5cfc440ae31607a61c704bc5f8c52f0141d207901c7e02c4b33373cd9da8c4"}
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.085784 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db5cfc440ae31607a61c704bc5f8c52f0141d207901c7e02c4b33373cd9da8c4"
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.230677 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.230912 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-log" containerID="cri-o://fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c" gracePeriod=30
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.231611 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-api" containerID="cri-o://b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010" gracePeriod=30
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.357039 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.400304 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj"
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.449354 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-config\") pod \"36aae0d9-72c5-4af8-9455-950962baeb28\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") "
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.449816 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-sb\") pod \"36aae0d9-72c5-4af8-9455-950962baeb28\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") "
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.450110 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-dns-svc\") pod \"36aae0d9-72c5-4af8-9455-950962baeb28\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") "
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.450320 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-nb\") pod \"36aae0d9-72c5-4af8-9455-950962baeb28\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") "
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.450444 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4fvh\" (UniqueName: \"kubernetes.io/projected/36aae0d9-72c5-4af8-9455-950962baeb28-kube-api-access-m4fvh\") pod \"36aae0d9-72c5-4af8-9455-950962baeb28\" (UID: \"36aae0d9-72c5-4af8-9455-950962baeb28\") "
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.468450 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36aae0d9-72c5-4af8-9455-950962baeb28-kube-api-access-m4fvh" (OuterVolumeSpecName: "kube-api-access-m4fvh") pod "36aae0d9-72c5-4af8-9455-950962baeb28" (UID: "36aae0d9-72c5-4af8-9455-950962baeb28"). InnerVolumeSpecName "kube-api-access-m4fvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.531115 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36aae0d9-72c5-4af8-9455-950962baeb28" (UID: "36aae0d9-72c5-4af8-9455-950962baeb28"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.531696 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-config" (OuterVolumeSpecName: "config") pod "36aae0d9-72c5-4af8-9455-950962baeb28" (UID: "36aae0d9-72c5-4af8-9455-950962baeb28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.535018 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36aae0d9-72c5-4af8-9455-950962baeb28" (UID: "36aae0d9-72c5-4af8-9455-950962baeb28"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.541813 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36aae0d9-72c5-4af8-9455-950962baeb28" (UID: "36aae0d9-72c5-4af8-9455-950962baeb28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.553214 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.553263 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.553277 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4fvh\" (UniqueName: \"kubernetes.io/projected/36aae0d9-72c5-4af8-9455-950962baeb28-kube-api-access-m4fvh\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.553288 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-config\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:51 crc kubenswrapper[4996]: I0228 09:22:51.553301 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36aae0d9-72c5-4af8-9455-950962baeb28-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:52 crc kubenswrapper[4996]: I0228 09:22:52.093873 4996 generic.go:334] "Generic (PLEG): container finished" podID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerID="fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c" exitCode=143
Feb 28 09:22:52 crc kubenswrapper[4996]: I0228 09:22:52.093929 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2eaaf97-646d-40f7-b1db-5d2110170b65","Type":"ContainerDied","Data":"fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c"}
Feb 28 09:22:52 crc kubenswrapper[4996]: I0228 09:22:52.096159 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj" event={"ID":"36aae0d9-72c5-4af8-9455-950962baeb28","Type":"ContainerDied","Data":"64ef11a14a7ff8cefce3cebba908b254626a1b1bce20bba9b276db3393d9b419"}
Feb 28 09:22:52 crc kubenswrapper[4996]: I0228 09:22:52.096211 4996 scope.go:117] "RemoveContainer" containerID="158595799c0ca17e54eb33f536b1bf532e3b707aa9ed6395879ad074acf10061"
Feb 28 09:22:52 crc kubenswrapper[4996]: I0228 09:22:52.096229 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5083d983-d38f-4f2d-88cd-f3246e0b8e82" containerName="nova-scheduler-scheduler" containerID="cri-o://b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2" gracePeriod=30
Feb 28 09:22:52 crc kubenswrapper[4996]: I0228 09:22:52.096301 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-2kbxj"
Feb 28 09:22:52 crc kubenswrapper[4996]: I0228 09:22:52.122174 4996 scope.go:117] "RemoveContainer" containerID="fe4464f0597bc1cc7aed7d0b2a3320643c9d2f0e805e34f9b666fecd4e233eae"
Feb 28 09:22:52 crc kubenswrapper[4996]: I0228 09:22:52.130250 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-2kbxj"]
Feb 28 09:22:52 crc kubenswrapper[4996]: I0228 09:22:52.137993 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-2kbxj"]
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.046462 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36aae0d9-72c5-4af8-9455-950962baeb28" path="/var/lib/kubelet/pods/36aae0d9-72c5-4af8-9455-950962baeb28/volumes"
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.107229 4996 generic.go:334] "Generic (PLEG): container finished" podID="e55f51fe-d8cc-47d5-9b5d-29877c65069a" containerID="f5e5764fc2eee7b2ce75f276442d69f89e72ad38ff1431413ae64a2494e44e5a" exitCode=0
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.107311 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pf8pr" event={"ID":"e55f51fe-d8cc-47d5-9b5d-29877c65069a","Type":"ContainerDied","Data":"f5e5764fc2eee7b2ce75f276442d69f89e72ad38ff1431413ae64a2494e44e5a"}
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.703176 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.902366 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-config-data\") pod \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") "
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.902421 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5tqg\" (UniqueName: \"kubernetes.io/projected/5083d983-d38f-4f2d-88cd-f3246e0b8e82-kube-api-access-l5tqg\") pod \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") "
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.902483 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-combined-ca-bundle\") pod \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\" (UID: \"5083d983-d38f-4f2d-88cd-f3246e0b8e82\") "
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.911988 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5083d983-d38f-4f2d-88cd-f3246e0b8e82-kube-api-access-l5tqg" (OuterVolumeSpecName: "kube-api-access-l5tqg") pod "5083d983-d38f-4f2d-88cd-f3246e0b8e82" (UID: "5083d983-d38f-4f2d-88cd-f3246e0b8e82"). InnerVolumeSpecName "kube-api-access-l5tqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.938360 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-config-data" (OuterVolumeSpecName: "config-data") pod "5083d983-d38f-4f2d-88cd-f3246e0b8e82" (UID: "5083d983-d38f-4f2d-88cd-f3246e0b8e82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:22:53 crc kubenswrapper[4996]: I0228 09:22:53.967676 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5083d983-d38f-4f2d-88cd-f3246e0b8e82" (UID: "5083d983-d38f-4f2d-88cd-f3246e0b8e82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.015229 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-config-data\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.015261 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5tqg\" (UniqueName: \"kubernetes.io/projected/5083d983-d38f-4f2d-88cd-f3246e0b8e82-kube-api-access-l5tqg\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.015272 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5083d983-d38f-4f2d-88cd-f3246e0b8e82-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.118989 4996 generic.go:334] "Generic (PLEG): container finished" podID="5083d983-d38f-4f2d-88cd-f3246e0b8e82" containerID="b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2" exitCode=0
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.119071 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5083d983-d38f-4f2d-88cd-f3246e0b8e82","Type":"ContainerDied","Data":"b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2"}
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.119110 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5083d983-d38f-4f2d-88cd-f3246e0b8e82","Type":"ContainerDied","Data":"8f99d6b36c4a633811a1c7d58e286842d2e9b3b71605da8139ee70e19b07fb1a"}
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.119126 4996 scope.go:117] "RemoveContainer" containerID="b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.119197 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.161157 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.163220 4996 scope.go:117] "RemoveContainer" containerID="b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2"
Feb 28 09:22:54 crc kubenswrapper[4996]: E0228 09:22:54.168432 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2\": container with ID starting with b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2 not found: ID does not exist" containerID="b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.168486 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2"} err="failed to get container status \"b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2\": rpc error: code = NotFound desc = could not find container \"b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2\": container with ID starting with b318d50b1dc8297dc325331442b14152441b56c1f4fe5230b0f66b68810079e2 not found: ID does not exist"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.182335 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.190854 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 09:22:54 crc kubenswrapper[4996]: E0228 09:22:54.191287 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc51c80-ff0d-4bec-80c7-bd45d5e4970d" containerName="nova-manage"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.191311 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc51c80-ff0d-4bec-80c7-bd45d5e4970d" containerName="nova-manage"
Feb 28 09:22:54 crc kubenswrapper[4996]: E0228 09:22:54.191333 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36aae0d9-72c5-4af8-9455-950962baeb28" containerName="init"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.191342 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="36aae0d9-72c5-4af8-9455-950962baeb28" containerName="init"
Feb 28 09:22:54 crc kubenswrapper[4996]: E0228 09:22:54.191371 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5083d983-d38f-4f2d-88cd-f3246e0b8e82" containerName="nova-scheduler-scheduler"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.191380 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5083d983-d38f-4f2d-88cd-f3246e0b8e82" containerName="nova-scheduler-scheduler"
Feb 28 09:22:54 crc kubenswrapper[4996]: E0228 09:22:54.191397 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36aae0d9-72c5-4af8-9455-950962baeb28" containerName="dnsmasq-dns"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.191407 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="36aae0d9-72c5-4af8-9455-950962baeb28" containerName="dnsmasq-dns"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.191613 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc51c80-ff0d-4bec-80c7-bd45d5e4970d" containerName="nova-manage"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.191630 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="36aae0d9-72c5-4af8-9455-950962baeb28" containerName="dnsmasq-dns"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.191644 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5083d983-d38f-4f2d-88cd-f3246e0b8e82" containerName="nova-scheduler-scheduler"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.192352 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.194774 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.199429 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.217905 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-config-data\") pod \"nova-scheduler-0\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.217987 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4xjs\" (UniqueName: \"kubernetes.io/projected/b49a28a3-c5c5-464f-9438-d1756138dfe1-kube-api-access-t4xjs\") pod \"nova-scheduler-0\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.218282 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.325106 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-config-data\") pod \"nova-scheduler-0\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.325472 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4xjs\" (UniqueName: \"kubernetes.io/projected/b49a28a3-c5c5-464f-9438-d1756138dfe1-kube-api-access-t4xjs\") pod \"nova-scheduler-0\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.325537 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.329972 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.338393 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-config-data\") pod \"nova-scheduler-0\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.352359 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4xjs\" (UniqueName: \"kubernetes.io/projected/b49a28a3-c5c5-464f-9438-d1756138dfe1-kube-api-access-t4xjs\") pod \"nova-scheduler-0\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.484360 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pf8pr"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.512513 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.550836 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrd2t\" (UniqueName: \"kubernetes.io/projected/e55f51fe-d8cc-47d5-9b5d-29877c65069a-kube-api-access-wrd2t\") pod \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") "
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.551053 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-config-data\") pod \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") "
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.551112 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-scripts\") pod \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") "
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.551176 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-combined-ca-bundle\") pod \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\" (UID: \"e55f51fe-d8cc-47d5-9b5d-29877c65069a\") "
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.564430 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55f51fe-d8cc-47d5-9b5d-29877c65069a-kube-api-access-wrd2t" (OuterVolumeSpecName: "kube-api-access-wrd2t") pod "e55f51fe-d8cc-47d5-9b5d-29877c65069a" (UID: "e55f51fe-d8cc-47d5-9b5d-29877c65069a"). InnerVolumeSpecName "kube-api-access-wrd2t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.567183 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-scripts" (OuterVolumeSpecName: "scripts") pod "e55f51fe-d8cc-47d5-9b5d-29877c65069a" (UID: "e55f51fe-d8cc-47d5-9b5d-29877c65069a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.594576 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-config-data" (OuterVolumeSpecName: "config-data") pod "e55f51fe-d8cc-47d5-9b5d-29877c65069a" (UID: "e55f51fe-d8cc-47d5-9b5d-29877c65069a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.600803 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e55f51fe-d8cc-47d5-9b5d-29877c65069a" (UID: "e55f51fe-d8cc-47d5-9b5d-29877c65069a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.653423 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-config-data\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.653455 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.653466 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e55f51fe-d8cc-47d5-9b5d-29877c65069a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:54 crc kubenswrapper[4996]: I0228 09:22:54.653479 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrd2t\" (UniqueName: \"kubernetes.io/projected/e55f51fe-d8cc-47d5-9b5d-29877c65069a-kube-api-access-wrd2t\") on node \"crc\" DevicePath \"\""
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.016396 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.043953 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5083d983-d38f-4f2d-88cd-f3246e0b8e82" path="/var/lib/kubelet/pods/5083d983-d38f-4f2d-88cd-f3246e0b8e82/volumes"
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.132770 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b49a28a3-c5c5-464f-9438-d1756138dfe1","Type":"ContainerStarted","Data":"9240467e1a4a05f402be5ed0e6e3b08e266fb5228a9bcf81c0c983f658f5a921"}
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.137411 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-pf8pr" event={"ID":"e55f51fe-d8cc-47d5-9b5d-29877c65069a","Type":"ContainerDied","Data":"6a1188be87c9066f4de3a6c2eb0b218d5c0636ad8ed1845a5967da507c4a30c0"}
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.137560 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a1188be87c9066f4de3a6c2eb0b218d5c0636ad8ed1845a5967da507c4a30c0"
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.137456 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-pf8pr"
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.203531 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 28 09:22:55 crc kubenswrapper[4996]: E0228 09:22:55.203966 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55f51fe-d8cc-47d5-9b5d-29877c65069a" containerName="nova-cell1-conductor-db-sync"
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.203991 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55f51fe-d8cc-47d5-9b5d-29877c65069a" containerName="nova-cell1-conductor-db-sync"
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.204313 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55f51fe-d8cc-47d5-9b5d-29877c65069a" containerName="nova-cell1-conductor-db-sync"
Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.205083 4996 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.207087 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.218419 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.264276 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz2ms\" (UniqueName: \"kubernetes.io/projected/09bc4f70-3953-4e3d-a6b0-60905a719e37-kube-api-access-bz2ms\") pod \"nova-cell1-conductor-0\" (UID: \"09bc4f70-3953-4e3d-a6b0-60905a719e37\") " pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.264597 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bc4f70-3953-4e3d-a6b0-60905a719e37-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"09bc4f70-3953-4e3d-a6b0-60905a719e37\") " pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.264733 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bc4f70-3953-4e3d-a6b0-60905a719e37-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"09bc4f70-3953-4e3d-a6b0-60905a719e37\") " pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.368551 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz2ms\" (UniqueName: \"kubernetes.io/projected/09bc4f70-3953-4e3d-a6b0-60905a719e37-kube-api-access-bz2ms\") pod \"nova-cell1-conductor-0\" (UID: \"09bc4f70-3953-4e3d-a6b0-60905a719e37\") " pod="openstack/nova-cell1-conductor-0" Feb 28 
09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.368780 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bc4f70-3953-4e3d-a6b0-60905a719e37-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"09bc4f70-3953-4e3d-a6b0-60905a719e37\") " pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.368903 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bc4f70-3953-4e3d-a6b0-60905a719e37-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"09bc4f70-3953-4e3d-a6b0-60905a719e37\") " pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.373941 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bc4f70-3953-4e3d-a6b0-60905a719e37-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"09bc4f70-3953-4e3d-a6b0-60905a719e37\") " pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.388285 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bc4f70-3953-4e3d-a6b0-60905a719e37-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"09bc4f70-3953-4e3d-a6b0-60905a719e37\") " pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.398686 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz2ms\" (UniqueName: \"kubernetes.io/projected/09bc4f70-3953-4e3d-a6b0-60905a719e37-kube-api-access-bz2ms\") pod \"nova-cell1-conductor-0\" (UID: \"09bc4f70-3953-4e3d-a6b0-60905a719e37\") " pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:55 crc kubenswrapper[4996]: I0228 09:22:55.540474 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:56 crc kubenswrapper[4996]: I0228 09:22:56.048501 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 09:22:56 crc kubenswrapper[4996]: I0228 09:22:56.149342 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"09bc4f70-3953-4e3d-a6b0-60905a719e37","Type":"ContainerStarted","Data":"534f186b18a49b301ee9080309eb469a57792733cf4045753260bdbe0f6a6299"} Feb 28 09:22:56 crc kubenswrapper[4996]: I0228 09:22:56.155890 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b49a28a3-c5c5-464f-9438-d1756138dfe1","Type":"ContainerStarted","Data":"44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c"} Feb 28 09:22:56 crc kubenswrapper[4996]: I0228 09:22:56.200275 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.20024855 podStartE2EDuration="2.20024855s" podCreationTimestamp="2026-02-28 09:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:22:56.196500169 +0000 UTC m=+1339.887302980" watchObservedRunningTime="2026-02-28 09:22:56.20024855 +0000 UTC m=+1339.891051381" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.188752 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.189055 4996 generic.go:334] "Generic (PLEG): container finished" podID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerID="b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010" exitCode=0 Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.189126 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2eaaf97-646d-40f7-b1db-5d2110170b65","Type":"ContainerDied","Data":"b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010"} Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.189157 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f2eaaf97-646d-40f7-b1db-5d2110170b65","Type":"ContainerDied","Data":"8fdb0bb3ac47c5912fc4a4c9f831a23fe86fe9e955792d17186f7bb5f718ab93"} Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.189177 4996 scope.go:117] "RemoveContainer" containerID="b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.223429 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"09bc4f70-3953-4e3d-a6b0-60905a719e37","Type":"ContainerStarted","Data":"7c2f7100bd60e3552fdb9f6d5f556294b4951fa91a93a2f576c1c25636ca59b2"} Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.223475 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.239473 4996 scope.go:117] "RemoveContainer" containerID="fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.268349 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.268331724 podStartE2EDuration="2.268331724s" 
podCreationTimestamp="2026-02-28 09:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:22:57.26568359 +0000 UTC m=+1340.956486401" watchObservedRunningTime="2026-02-28 09:22:57.268331724 +0000 UTC m=+1340.959134535" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.301915 4996 scope.go:117] "RemoveContainer" containerID="b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010" Feb 28 09:22:57 crc kubenswrapper[4996]: E0228 09:22:57.302398 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010\": container with ID starting with b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010 not found: ID does not exist" containerID="b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.302424 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010"} err="failed to get container status \"b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010\": rpc error: code = NotFound desc = could not find container \"b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010\": container with ID starting with b667bb6a330f9af4ca6aa75a1854c2fa79d8320bc0aa3884dfe7aa26d5b73010 not found: ID does not exist" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.302442 4996 scope.go:117] "RemoveContainer" containerID="fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c" Feb 28 09:22:57 crc kubenswrapper[4996]: E0228 09:22:57.302686 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c\": 
container with ID starting with fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c not found: ID does not exist" containerID="fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.302706 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c"} err="failed to get container status \"fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c\": rpc error: code = NotFound desc = could not find container \"fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c\": container with ID starting with fa67d829d984fabdd836e9502e9083f8c49d634bee32b025d2281f8c37dbf67c not found: ID does not exist" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.328235 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-combined-ca-bundle\") pod \"f2eaaf97-646d-40f7-b1db-5d2110170b65\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.328362 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2eaaf97-646d-40f7-b1db-5d2110170b65-logs\") pod \"f2eaaf97-646d-40f7-b1db-5d2110170b65\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.328391 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-config-data\") pod \"f2eaaf97-646d-40f7-b1db-5d2110170b65\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.328461 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-r45kh\" (UniqueName: \"kubernetes.io/projected/f2eaaf97-646d-40f7-b1db-5d2110170b65-kube-api-access-r45kh\") pod \"f2eaaf97-646d-40f7-b1db-5d2110170b65\" (UID: \"f2eaaf97-646d-40f7-b1db-5d2110170b65\") " Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.330442 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2eaaf97-646d-40f7-b1db-5d2110170b65-logs" (OuterVolumeSpecName: "logs") pod "f2eaaf97-646d-40f7-b1db-5d2110170b65" (UID: "f2eaaf97-646d-40f7-b1db-5d2110170b65"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.333266 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2eaaf97-646d-40f7-b1db-5d2110170b65-kube-api-access-r45kh" (OuterVolumeSpecName: "kube-api-access-r45kh") pod "f2eaaf97-646d-40f7-b1db-5d2110170b65" (UID: "f2eaaf97-646d-40f7-b1db-5d2110170b65"). InnerVolumeSpecName "kube-api-access-r45kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.350745 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2eaaf97-646d-40f7-b1db-5d2110170b65" (UID: "f2eaaf97-646d-40f7-b1db-5d2110170b65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.381692 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-config-data" (OuterVolumeSpecName: "config-data") pod "f2eaaf97-646d-40f7-b1db-5d2110170b65" (UID: "f2eaaf97-646d-40f7-b1db-5d2110170b65"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.430967 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2eaaf97-646d-40f7-b1db-5d2110170b65-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.431019 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.431031 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r45kh\" (UniqueName: \"kubernetes.io/projected/f2eaaf97-646d-40f7-b1db-5d2110170b65-kube-api-access-r45kh\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:57 crc kubenswrapper[4996]: I0228 09:22:57.431041 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eaaf97-646d-40f7-b1db-5d2110170b65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.239677 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.284416 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.295459 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.309901 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 09:22:58 crc kubenswrapper[4996]: E0228 09:22:58.310318 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-log" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.310338 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-log" Feb 28 09:22:58 crc kubenswrapper[4996]: E0228 09:22:58.310374 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-api" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.310381 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-api" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.310519 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-api" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.310538 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" containerName="nova-api-log" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.311371 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.322099 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.333759 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.355794 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.355886 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmw2b\" (UniqueName: \"kubernetes.io/projected/c1deeab9-d70d-4591-a15a-1367cae92f3d-kube-api-access-qmw2b\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.355961 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1deeab9-d70d-4591-a15a-1367cae92f3d-logs\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.355989 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-config-data\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.457944 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.458039 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmw2b\" (UniqueName: \"kubernetes.io/projected/c1deeab9-d70d-4591-a15a-1367cae92f3d-kube-api-access-qmw2b\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.458081 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1deeab9-d70d-4591-a15a-1367cae92f3d-logs\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.458121 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-config-data\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.458856 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1deeab9-d70d-4591-a15a-1367cae92f3d-logs\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.462883 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-config-data\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.464807 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.479902 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmw2b\" (UniqueName: \"kubernetes.io/projected/c1deeab9-d70d-4591-a15a-1367cae92f3d-kube-api-access-qmw2b\") pod \"nova-api-0\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " pod="openstack/nova-api-0" Feb 28 09:22:58 crc kubenswrapper[4996]: I0228 09:22:58.655819 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:22:59 crc kubenswrapper[4996]: I0228 09:22:59.046177 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2eaaf97-646d-40f7-b1db-5d2110170b65" path="/var/lib/kubelet/pods/f2eaaf97-646d-40f7-b1db-5d2110170b65/volumes" Feb 28 09:22:59 crc kubenswrapper[4996]: I0228 09:22:59.102874 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 28 09:22:59 crc kubenswrapper[4996]: I0228 09:22:59.143185 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:22:59 crc kubenswrapper[4996]: I0228 09:22:59.260385 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1deeab9-d70d-4591-a15a-1367cae92f3d","Type":"ContainerStarted","Data":"b5be7e993ce53f8d1699bcac98511bc1ffc1ec63d5563388569aac5f6381d8fa"} Feb 28 09:22:59 crc kubenswrapper[4996]: I0228 09:22:59.512745 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 09:23:00 crc kubenswrapper[4996]: I0228 09:23:00.275950 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c1deeab9-d70d-4591-a15a-1367cae92f3d","Type":"ContainerStarted","Data":"378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c"} Feb 28 09:23:00 crc kubenswrapper[4996]: I0228 09:23:00.276046 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1deeab9-d70d-4591-a15a-1367cae92f3d","Type":"ContainerStarted","Data":"6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938"} Feb 28 09:23:00 crc kubenswrapper[4996]: I0228 09:23:00.304299 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.304283099 podStartE2EDuration="2.304283099s" podCreationTimestamp="2026-02-28 09:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:23:00.303274595 +0000 UTC m=+1343.994077416" watchObservedRunningTime="2026-02-28 09:23:00.304283099 +0000 UTC m=+1343.995085910" Feb 28 09:23:00 crc kubenswrapper[4996]: I0228 09:23:00.961393 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 09:23:00 crc kubenswrapper[4996]: I0228 09:23:00.961579 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b0396f01-c84a-4562-a8e3-6f166d52d629" containerName="kube-state-metrics" containerID="cri-o://d5f6f46a7f6d9003eebe9c427d77bdba0fc4b0d09937fd196d48a71dbe7eb71e" gracePeriod=30 Feb 28 09:23:01 crc kubenswrapper[4996]: I0228 09:23:01.284802 4996 generic.go:334] "Generic (PLEG): container finished" podID="b0396f01-c84a-4562-a8e3-6f166d52d629" containerID="d5f6f46a7f6d9003eebe9c427d77bdba0fc4b0d09937fd196d48a71dbe7eb71e" exitCode=2 Feb 28 09:23:01 crc kubenswrapper[4996]: I0228 09:23:01.284863 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"b0396f01-c84a-4562-a8e3-6f166d52d629","Type":"ContainerDied","Data":"d5f6f46a7f6d9003eebe9c427d77bdba0fc4b0d09937fd196d48a71dbe7eb71e"} Feb 28 09:23:01 crc kubenswrapper[4996]: I0228 09:23:01.471970 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 09:23:01 crc kubenswrapper[4996]: I0228 09:23:01.512210 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kr7fs\" (UniqueName: \"kubernetes.io/projected/b0396f01-c84a-4562-a8e3-6f166d52d629-kube-api-access-kr7fs\") pod \"b0396f01-c84a-4562-a8e3-6f166d52d629\" (UID: \"b0396f01-c84a-4562-a8e3-6f166d52d629\") " Feb 28 09:23:01 crc kubenswrapper[4996]: I0228 09:23:01.517775 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0396f01-c84a-4562-a8e3-6f166d52d629-kube-api-access-kr7fs" (OuterVolumeSpecName: "kube-api-access-kr7fs") pod "b0396f01-c84a-4562-a8e3-6f166d52d629" (UID: "b0396f01-c84a-4562-a8e3-6f166d52d629"). InnerVolumeSpecName "kube-api-access-kr7fs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:01 crc kubenswrapper[4996]: I0228 09:23:01.614324 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kr7fs\" (UniqueName: \"kubernetes.io/projected/b0396f01-c84a-4562-a8e3-6f166d52d629-kube-api-access-kr7fs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.149163 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.149762 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="ceilometer-central-agent" containerID="cri-o://25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9" gracePeriod=30 Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.149837 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="proxy-httpd" containerID="cri-o://6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680" gracePeriod=30 Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.150064 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="ceilometer-notification-agent" containerID="cri-o://2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4" gracePeriod=30 Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.150141 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="sg-core" containerID="cri-o://3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7" gracePeriod=30 Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.297204 4996 generic.go:334] "Generic (PLEG): container 
finished" podID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerID="6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680" exitCode=0 Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.297249 4996 generic.go:334] "Generic (PLEG): container finished" podID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerID="3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7" exitCode=2 Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.297300 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerDied","Data":"6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680"} Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.297361 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerDied","Data":"3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7"} Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.299402 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b0396f01-c84a-4562-a8e3-6f166d52d629","Type":"ContainerDied","Data":"12f0d8e25091040b26ba26d162c235264cafeb28495739ad82c99ec2f4321466"} Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.299445 4996 scope.go:117] "RemoveContainer" containerID="d5f6f46a7f6d9003eebe9c427d77bdba0fc4b0d09937fd196d48a71dbe7eb71e" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.299480 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.360712 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.371228 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.384680 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 09:23:02 crc kubenswrapper[4996]: E0228 09:23:02.385033 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0396f01-c84a-4562-a8e3-6f166d52d629" containerName="kube-state-metrics" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.385048 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0396f01-c84a-4562-a8e3-6f166d52d629" containerName="kube-state-metrics" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.385217 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0396f01-c84a-4562-a8e3-6f166d52d629" containerName="kube-state-metrics" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.385907 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.390682 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.390687 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.398427 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.430506 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e23f7f-31d2-496c-898d-4f46db4da6cc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.430787 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/93e23f7f-31d2-496c-898d-4f46db4da6cc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.430919 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq8c6\" (UniqueName: \"kubernetes.io/projected/93e23f7f-31d2-496c-898d-4f46db4da6cc-kube-api-access-rq8c6\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.431058 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/93e23f7f-31d2-496c-898d-4f46db4da6cc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.533034 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e23f7f-31d2-496c-898d-4f46db4da6cc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.533161 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/93e23f7f-31d2-496c-898d-4f46db4da6cc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.533191 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq8c6\" (UniqueName: \"kubernetes.io/projected/93e23f7f-31d2-496c-898d-4f46db4da6cc-kube-api-access-rq8c6\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.533215 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/93e23f7f-31d2-496c-898d-4f46db4da6cc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.538060 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/93e23f7f-31d2-496c-898d-4f46db4da6cc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.538843 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/93e23f7f-31d2-496c-898d-4f46db4da6cc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.544506 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93e23f7f-31d2-496c-898d-4f46db4da6cc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.577970 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq8c6\" (UniqueName: \"kubernetes.io/projected/93e23f7f-31d2-496c-898d-4f46db4da6cc-kube-api-access-rq8c6\") pod \"kube-state-metrics-0\" (UID: \"93e23f7f-31d2-496c-898d-4f46db4da6cc\") " pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.709635 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 09:23:02 crc kubenswrapper[4996]: I0228 09:23:02.991955 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.041716 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-config-data\") pod \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.041774 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88kdt\" (UniqueName: \"kubernetes.io/projected/3f83527f-6adb-44c7-8d28-79411d2f8aa2-kube-api-access-88kdt\") pod \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.041824 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-combined-ca-bundle\") pod \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.041892 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-sg-core-conf-yaml\") pod \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.041958 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-run-httpd\") pod \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.041984 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-log-httpd\") pod \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.042098 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-scripts\") pod \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\" (UID: \"3f83527f-6adb-44c7-8d28-79411d2f8aa2\") " Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.042734 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f83527f-6adb-44c7-8d28-79411d2f8aa2" (UID: "3f83527f-6adb-44c7-8d28-79411d2f8aa2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.044098 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f83527f-6adb-44c7-8d28-79411d2f8aa2" (UID: "3f83527f-6adb-44c7-8d28-79411d2f8aa2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.063751 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-scripts" (OuterVolumeSpecName: "scripts") pod "3f83527f-6adb-44c7-8d28-79411d2f8aa2" (UID: "3f83527f-6adb-44c7-8d28-79411d2f8aa2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.068155 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f83527f-6adb-44c7-8d28-79411d2f8aa2-kube-api-access-88kdt" (OuterVolumeSpecName: "kube-api-access-88kdt") pod "3f83527f-6adb-44c7-8d28-79411d2f8aa2" (UID: "3f83527f-6adb-44c7-8d28-79411d2f8aa2"). InnerVolumeSpecName "kube-api-access-88kdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.071732 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0396f01-c84a-4562-a8e3-6f166d52d629" path="/var/lib/kubelet/pods/b0396f01-c84a-4562-a8e3-6f166d52d629/volumes" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.107213 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f83527f-6adb-44c7-8d28-79411d2f8aa2" (UID: "3f83527f-6adb-44c7-8d28-79411d2f8aa2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.137750 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f83527f-6adb-44c7-8d28-79411d2f8aa2" (UID: "3f83527f-6adb-44c7-8d28-79411d2f8aa2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.144275 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.144305 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88kdt\" (UniqueName: \"kubernetes.io/projected/3f83527f-6adb-44c7-8d28-79411d2f8aa2-kube-api-access-88kdt\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.144315 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.144324 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.144331 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.144339 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f83527f-6adb-44c7-8d28-79411d2f8aa2-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.177087 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-config-data" (OuterVolumeSpecName: "config-data") pod "3f83527f-6adb-44c7-8d28-79411d2f8aa2" (UID: "3f83527f-6adb-44c7-8d28-79411d2f8aa2"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:03 crc kubenswrapper[4996]: W0228 09:23:03.193491 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93e23f7f_31d2_496c_898d_4f46db4da6cc.slice/crio-2a663e738858455f80201fdd24eb5fed4064bfd4106a37ad832fafdb34b28488 WatchSource:0}: Error finding container 2a663e738858455f80201fdd24eb5fed4064bfd4106a37ad832fafdb34b28488: Status 404 returned error can't find the container with id 2a663e738858455f80201fdd24eb5fed4064bfd4106a37ad832fafdb34b28488 Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.194972 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.245522 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f83527f-6adb-44c7-8d28-79411d2f8aa2-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.312521 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"93e23f7f-31d2-496c-898d-4f46db4da6cc","Type":"ContainerStarted","Data":"2a663e738858455f80201fdd24eb5fed4064bfd4106a37ad832fafdb34b28488"} Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.319631 4996 generic.go:334] "Generic (PLEG): container finished" podID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerID="2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4" exitCode=0 Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.319663 4996 generic.go:334] "Generic (PLEG): container finished" podID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerID="25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9" exitCode=0 Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.319677 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.319697 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerDied","Data":"2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4"} Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.319733 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerDied","Data":"25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9"} Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.319749 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f83527f-6adb-44c7-8d28-79411d2f8aa2","Type":"ContainerDied","Data":"3c16bbff50c16d10d6a525c4a8a2b29c6296848fb9263c6ae1fa85e389b9bcf0"} Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.319768 4996 scope.go:117] "RemoveContainer" containerID="6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.362904 4996 scope.go:117] "RemoveContainer" containerID="3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.365555 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.383269 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.386275 4996 scope.go:117] "RemoveContainer" containerID="2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.395798 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:03 crc kubenswrapper[4996]: E0228 09:23:03.396145 4996 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="ceilometer-central-agent" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.396157 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="ceilometer-central-agent" Feb 28 09:23:03 crc kubenswrapper[4996]: E0228 09:23:03.396172 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="proxy-httpd" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.396178 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="proxy-httpd" Feb 28 09:23:03 crc kubenswrapper[4996]: E0228 09:23:03.396199 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="sg-core" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.396206 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="sg-core" Feb 28 09:23:03 crc kubenswrapper[4996]: E0228 09:23:03.396220 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="ceilometer-notification-agent" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.396227 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="ceilometer-notification-agent" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.396405 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="ceilometer-central-agent" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.396426 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="sg-core" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 
09:23:03.396438 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="proxy-httpd" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.396455 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" containerName="ceilometer-notification-agent" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.398110 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.406850 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.416991 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.417316 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.417413 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.453104 4996 scope.go:117] "RemoveContainer" containerID="25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.453111 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-log-httpd\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.453265 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.453346 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-scripts\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.453515 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-config-data\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.453614 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.453644 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.453702 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-run-httpd\") pod \"ceilometer-0\" (UID: 
\"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.453752 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kh7r\" (UniqueName: \"kubernetes.io/projected/b207b94c-d451-4978-937d-f7ac429d83ed-kube-api-access-5kh7r\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.478745 4996 scope.go:117] "RemoveContainer" containerID="6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680" Feb 28 09:23:03 crc kubenswrapper[4996]: E0228 09:23:03.479158 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680\": container with ID starting with 6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680 not found: ID does not exist" containerID="6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.479202 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680"} err="failed to get container status \"6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680\": rpc error: code = NotFound desc = could not find container \"6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680\": container with ID starting with 6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680 not found: ID does not exist" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.479233 4996 scope.go:117] "RemoveContainer" containerID="3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7" Feb 28 09:23:03 crc kubenswrapper[4996]: E0228 09:23:03.479665 4996 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7\": container with ID starting with 3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7 not found: ID does not exist" containerID="3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.479703 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7"} err="failed to get container status \"3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7\": rpc error: code = NotFound desc = could not find container \"3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7\": container with ID starting with 3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7 not found: ID does not exist" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.479728 4996 scope.go:117] "RemoveContainer" containerID="2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4" Feb 28 09:23:03 crc kubenswrapper[4996]: E0228 09:23:03.480420 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4\": container with ID starting with 2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4 not found: ID does not exist" containerID="2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.480449 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4"} err="failed to get container status \"2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4\": rpc error: code = NotFound desc = could not find container 
\"2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4\": container with ID starting with 2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4 not found: ID does not exist" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.480468 4996 scope.go:117] "RemoveContainer" containerID="25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9" Feb 28 09:23:03 crc kubenswrapper[4996]: E0228 09:23:03.480735 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9\": container with ID starting with 25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9 not found: ID does not exist" containerID="25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.480762 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9"} err="failed to get container status \"25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9\": rpc error: code = NotFound desc = could not find container \"25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9\": container with ID starting with 25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9 not found: ID does not exist" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.480796 4996 scope.go:117] "RemoveContainer" containerID="6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.481107 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680"} err="failed to get container status \"6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680\": rpc error: code = NotFound desc = could not find 
container \"6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680\": container with ID starting with 6e6b804ca87bcdc3bc35d9f5594744b852c063ccc1640354e94fcba7e3a79680 not found: ID does not exist" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.481142 4996 scope.go:117] "RemoveContainer" containerID="3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.481424 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7"} err="failed to get container status \"3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7\": rpc error: code = NotFound desc = could not find container \"3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7\": container with ID starting with 3ab3e7bfebd6084c7cdd08edce5cb6a111bde8fc090550e23756b48541b1e7d7 not found: ID does not exist" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.481453 4996 scope.go:117] "RemoveContainer" containerID="2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.481725 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4"} err="failed to get container status \"2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4\": rpc error: code = NotFound desc = could not find container \"2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4\": container with ID starting with 2c01ad420c3c7d75a327cc461c299d4bb1d6cefd5303f1f4b9cbd078191a84c4 not found: ID does not exist" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.481750 4996 scope.go:117] "RemoveContainer" containerID="25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.481970 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9"} err="failed to get container status \"25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9\": rpc error: code = NotFound desc = could not find container \"25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9\": container with ID starting with 25c8091575bec465e9fc25a00bf49f9ebb29378d4965c796160191759203fde9 not found: ID does not exist" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.555221 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-scripts\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.555317 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-config-data\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.555366 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.555390 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.555413 
4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-run-httpd\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.555436 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kh7r\" (UniqueName: \"kubernetes.io/projected/b207b94c-d451-4978-937d-f7ac429d83ed-kube-api-access-5kh7r\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.555519 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-log-httpd\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.555544 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.556504 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-run-httpd\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.556721 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-log-httpd\") pod \"ceilometer-0\" (UID: 
\"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.559695 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.560074 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.560643 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-config-data\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.561332 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-scripts\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.575146 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.577389 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kh7r\" (UniqueName: 
\"kubernetes.io/projected/b207b94c-d451-4978-937d-f7ac429d83ed-kube-api-access-5kh7r\") pod \"ceilometer-0\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " pod="openstack/ceilometer-0" Feb 28 09:23:03 crc kubenswrapper[4996]: I0228 09:23:03.731520 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:04 crc kubenswrapper[4996]: I0228 09:23:04.167969 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:04 crc kubenswrapper[4996]: W0228 09:23:04.171983 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb207b94c_d451_4978_937d_f7ac429d83ed.slice/crio-ebf52e877de622821dafaea5087fa646dbb8a6556fd2730415e90c44a2d6f145 WatchSource:0}: Error finding container ebf52e877de622821dafaea5087fa646dbb8a6556fd2730415e90c44a2d6f145: Status 404 returned error can't find the container with id ebf52e877de622821dafaea5087fa646dbb8a6556fd2730415e90c44a2d6f145 Feb 28 09:23:04 crc kubenswrapper[4996]: I0228 09:23:04.335277 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"93e23f7f-31d2-496c-898d-4f46db4da6cc","Type":"ContainerStarted","Data":"4cdf26985b9dcfd4cf28d08d6b925d91b8cced1b60887ff69f88f0fead5a8072"} Feb 28 09:23:04 crc kubenswrapper[4996]: I0228 09:23:04.336090 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 28 09:23:04 crc kubenswrapper[4996]: I0228 09:23:04.336919 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerStarted","Data":"ebf52e877de622821dafaea5087fa646dbb8a6556fd2730415e90c44a2d6f145"} Feb 28 09:23:04 crc kubenswrapper[4996]: I0228 09:23:04.351155 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=1.932929616 podStartE2EDuration="2.351134503s" podCreationTimestamp="2026-02-28 09:23:02 +0000 UTC" firstStartedPulling="2026-02-28 09:23:03.195639624 +0000 UTC m=+1346.886442435" lastFinishedPulling="2026-02-28 09:23:03.613844511 +0000 UTC m=+1347.304647322" observedRunningTime="2026-02-28 09:23:04.349911974 +0000 UTC m=+1348.040714795" watchObservedRunningTime="2026-02-28 09:23:04.351134503 +0000 UTC m=+1348.041937314" Feb 28 09:23:04 crc kubenswrapper[4996]: I0228 09:23:04.513440 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 09:23:04 crc kubenswrapper[4996]: I0228 09:23:04.542618 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 09:23:05 crc kubenswrapper[4996]: I0228 09:23:05.041981 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f83527f-6adb-44c7-8d28-79411d2f8aa2" path="/var/lib/kubelet/pods/3f83527f-6adb-44c7-8d28-79411d2f8aa2/volumes" Feb 28 09:23:05 crc kubenswrapper[4996]: I0228 09:23:05.350870 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerStarted","Data":"251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3"} Feb 28 09:23:05 crc kubenswrapper[4996]: I0228 09:23:05.384466 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 09:23:05 crc kubenswrapper[4996]: I0228 09:23:05.584521 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 28 09:23:06 crc kubenswrapper[4996]: I0228 09:23:06.361899 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerStarted","Data":"2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b"} Feb 28 09:23:06 
crc kubenswrapper[4996]: I0228 09:23:06.362333 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerStarted","Data":"450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b"} Feb 28 09:23:08 crc kubenswrapper[4996]: I0228 09:23:08.378565 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerStarted","Data":"15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4"} Feb 28 09:23:08 crc kubenswrapper[4996]: I0228 09:23:08.378995 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 09:23:08 crc kubenswrapper[4996]: I0228 09:23:08.404052 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.842238535 podStartE2EDuration="5.404021834s" podCreationTimestamp="2026-02-28 09:23:03 +0000 UTC" firstStartedPulling="2026-02-28 09:23:04.174202369 +0000 UTC m=+1347.865005180" lastFinishedPulling="2026-02-28 09:23:07.735985658 +0000 UTC m=+1351.426788479" observedRunningTime="2026-02-28 09:23:08.401419701 +0000 UTC m=+1352.092222532" watchObservedRunningTime="2026-02-28 09:23:08.404021834 +0000 UTC m=+1352.094824655" Feb 28 09:23:08 crc kubenswrapper[4996]: I0228 09:23:08.656392 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 09:23:08 crc kubenswrapper[4996]: I0228 09:23:08.656721 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 09:23:09 crc kubenswrapper[4996]: I0228 09:23:09.740181 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Feb 28 09:23:09 crc kubenswrapper[4996]: I0228 09:23:09.740184 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 09:23:12 crc kubenswrapper[4996]: I0228 09:23:12.249118 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:23:12 crc kubenswrapper[4996]: I0228 09:23:12.249434 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:23:12 crc kubenswrapper[4996]: I0228 09:23:12.723916 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.399698 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.408698 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.453548 4996 generic.go:334] "Generic (PLEG): container finished" podID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerID="c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44" exitCode=137 Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.453634 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fe693ee9-0d2c-4b47-afd8-ef8317915fcc","Type":"ContainerDied","Data":"c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44"} Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.453627 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.453676 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fe693ee9-0d2c-4b47-afd8-ef8317915fcc","Type":"ContainerDied","Data":"23e10201ab43e47b4410adda6c4f17195ac31a601b0fa8822930550a10d86e8f"} Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.453700 4996 scope.go:117] "RemoveContainer" containerID="c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.456463 4996 generic.go:334] "Generic (PLEG): container finished" podID="e62fba78-9bdd-41bf-8cb9-7e21d68e963c" containerID="b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942" exitCode=137 Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.456515 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e62fba78-9bdd-41bf-8cb9-7e21d68e963c","Type":"ContainerDied","Data":"b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942"} Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.456551 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"e62fba78-9bdd-41bf-8cb9-7e21d68e963c","Type":"ContainerDied","Data":"fd72b64ebef00416b6cfa04ecdc23d325596253af1c0a2866e9c1e22653fd2a1"} Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.456596 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.474394 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-combined-ca-bundle\") pod \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.474577 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-combined-ca-bundle\") pod \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.474667 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btdrb\" (UniqueName: \"kubernetes.io/projected/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-kube-api-access-btdrb\") pod \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.474709 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-config-data\") pod \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.474769 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-logs\") pod \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.474832 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-config-data\") pod \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\" (UID: \"e62fba78-9bdd-41bf-8cb9-7e21d68e963c\") " Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.474878 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxswb\" (UniqueName: \"kubernetes.io/projected/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-kube-api-access-sxswb\") pod \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\" (UID: \"fe693ee9-0d2c-4b47-afd8-ef8317915fcc\") " Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.475559 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-logs" (OuterVolumeSpecName: "logs") pod "fe693ee9-0d2c-4b47-afd8-ef8317915fcc" (UID: "fe693ee9-0d2c-4b47-afd8-ef8317915fcc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.480371 4996 scope.go:117] "RemoveContainer" containerID="26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.482034 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-kube-api-access-btdrb" (OuterVolumeSpecName: "kube-api-access-btdrb") pod "e62fba78-9bdd-41bf-8cb9-7e21d68e963c" (UID: "e62fba78-9bdd-41bf-8cb9-7e21d68e963c"). InnerVolumeSpecName "kube-api-access-btdrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.487046 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-kube-api-access-sxswb" (OuterVolumeSpecName: "kube-api-access-sxswb") pod "fe693ee9-0d2c-4b47-afd8-ef8317915fcc" (UID: "fe693ee9-0d2c-4b47-afd8-ef8317915fcc"). InnerVolumeSpecName "kube-api-access-sxswb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.502092 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe693ee9-0d2c-4b47-afd8-ef8317915fcc" (UID: "fe693ee9-0d2c-4b47-afd8-ef8317915fcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.504565 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e62fba78-9bdd-41bf-8cb9-7e21d68e963c" (UID: "e62fba78-9bdd-41bf-8cb9-7e21d68e963c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.505017 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-config-data" (OuterVolumeSpecName: "config-data") pod "fe693ee9-0d2c-4b47-afd8-ef8317915fcc" (UID: "fe693ee9-0d2c-4b47-afd8-ef8317915fcc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.509145 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-config-data" (OuterVolumeSpecName: "config-data") pod "e62fba78-9bdd-41bf-8cb9-7e21d68e963c" (UID: "e62fba78-9bdd-41bf-8cb9-7e21d68e963c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.577648 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.577685 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btdrb\" (UniqueName: \"kubernetes.io/projected/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-kube-api-access-btdrb\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.577698 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.577707 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.577716 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.577724 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxswb\" (UniqueName: 
\"kubernetes.io/projected/fe693ee9-0d2c-4b47-afd8-ef8317915fcc-kube-api-access-sxswb\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.577732 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e62fba78-9bdd-41bf-8cb9-7e21d68e963c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.602732 4996 scope.go:117] "RemoveContainer" containerID="c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44" Feb 28 09:23:15 crc kubenswrapper[4996]: E0228 09:23:15.603154 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44\": container with ID starting with c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44 not found: ID does not exist" containerID="c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.603189 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44"} err="failed to get container status \"c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44\": rpc error: code = NotFound desc = could not find container \"c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44\": container with ID starting with c822722facf6da644cb08b37bd73f58968cb10f95f34ae182fefbcaefd2c8c44 not found: ID does not exist" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.603211 4996 scope.go:117] "RemoveContainer" containerID="26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28" Feb 28 09:23:15 crc kubenswrapper[4996]: E0228 09:23:15.603427 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28\": container with ID starting with 26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28 not found: ID does not exist" containerID="26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.603459 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28"} err="failed to get container status \"26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28\": rpc error: code = NotFound desc = could not find container \"26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28\": container with ID starting with 26fcd6dea224ff30d5c6b7459e68c092dafcababd4df94d201863a1ba3ac7a28 not found: ID does not exist" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.603480 4996 scope.go:117] "RemoveContainer" containerID="b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.627794 4996 scope.go:117] "RemoveContainer" containerID="b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942" Feb 28 09:23:15 crc kubenswrapper[4996]: E0228 09:23:15.628392 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942\": container with ID starting with b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942 not found: ID does not exist" containerID="b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.628463 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942"} err="failed to get container status 
\"b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942\": rpc error: code = NotFound desc = could not find container \"b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942\": container with ID starting with b5bddea41c4161bef82f9dc64ccd5bb642cfd00fa4cc3eab9eb4df2d48a3c942 not found: ID does not exist" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.810217 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.830823 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.844239 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.854484 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.912686 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 09:23:15 crc kubenswrapper[4996]: E0228 09:23:15.913077 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62fba78-9bdd-41bf-8cb9-7e21d68e963c" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.913092 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62fba78-9bdd-41bf-8cb9-7e21d68e963c" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 09:23:15 crc kubenswrapper[4996]: E0228 09:23:15.913104 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerName="nova-metadata-metadata" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.913110 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerName="nova-metadata-metadata" Feb 28 09:23:15 crc kubenswrapper[4996]: E0228 
09:23:15.913124 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerName="nova-metadata-log" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.913130 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerName="nova-metadata-log" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.913294 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerName="nova-metadata-log" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.913304 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62fba78-9bdd-41bf-8cb9-7e21d68e963c" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.913320 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" containerName="nova-metadata-metadata" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.913841 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.918106 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.918239 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.918298 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.927276 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.929309 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.931136 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.931878 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.939143 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.950785 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.985165 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-828q9\" (UniqueName: \"kubernetes.io/projected/28a8ab76-f177-47a0-8b6c-9f8c75739b30-kube-api-access-828q9\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.985423 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.985579 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-config-data\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.985700 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9a34f8-7f3c-41ff-896c-328fe217c902-logs\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.985817 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.985925 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.986093 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.986249 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5plg\" (UniqueName: \"kubernetes.io/projected/7b9a34f8-7f3c-41ff-896c-328fe217c902-kube-api-access-v5plg\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.986366 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:15 crc kubenswrapper[4996]: I0228 09:23:15.986473 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088492 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-828q9\" (UniqueName: \"kubernetes.io/projected/28a8ab76-f177-47a0-8b6c-9f8c75739b30-kube-api-access-828q9\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088561 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088595 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-config-data\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088642 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7b9a34f8-7f3c-41ff-896c-328fe217c902-logs\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088665 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088682 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088757 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088811 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5plg\" (UniqueName: \"kubernetes.io/projected/7b9a34f8-7f3c-41ff-896c-328fe217c902-kube-api-access-v5plg\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088837 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.088866 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.089933 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9a34f8-7f3c-41ff-896c-328fe217c902-logs\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.093695 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.094042 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.094470 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 
09:23:16.097141 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-config-data\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.097410 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.097556 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.097644 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28a8ab76-f177-47a0-8b6c-9f8c75739b30-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.105729 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-828q9\" (UniqueName: \"kubernetes.io/projected/28a8ab76-f177-47a0-8b6c-9f8c75739b30-kube-api-access-828q9\") pod \"nova-cell1-novncproxy-0\" (UID: \"28a8ab76-f177-47a0-8b6c-9f8c75739b30\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.108402 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5plg\" (UniqueName: 
\"kubernetes.io/projected/7b9a34f8-7f3c-41ff-896c-328fe217c902-kube-api-access-v5plg\") pod \"nova-metadata-0\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.233238 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.248954 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.534674 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 09:23:16 crc kubenswrapper[4996]: I0228 09:23:16.598740 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:16 crc kubenswrapper[4996]: W0228 09:23:16.608094 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b9a34f8_7f3c_41ff_896c_328fe217c902.slice/crio-1a569e66dbd966973741742883c63420d5039a16edbde4943d46fe176be75d2d WatchSource:0}: Error finding container 1a569e66dbd966973741742883c63420d5039a16edbde4943d46fe176be75d2d: Status 404 returned error can't find the container with id 1a569e66dbd966973741742883c63420d5039a16edbde4943d46fe176be75d2d Feb 28 09:23:17 crc kubenswrapper[4996]: I0228 09:23:17.043756 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62fba78-9bdd-41bf-8cb9-7e21d68e963c" path="/var/lib/kubelet/pods/e62fba78-9bdd-41bf-8cb9-7e21d68e963c/volumes" Feb 28 09:23:17 crc kubenswrapper[4996]: I0228 09:23:17.044505 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe693ee9-0d2c-4b47-afd8-ef8317915fcc" path="/var/lib/kubelet/pods/fe693ee9-0d2c-4b47-afd8-ef8317915fcc/volumes" Feb 28 09:23:17 crc kubenswrapper[4996]: I0228 09:23:17.480299 4996 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b9a34f8-7f3c-41ff-896c-328fe217c902","Type":"ContainerStarted","Data":"c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4"} Feb 28 09:23:17 crc kubenswrapper[4996]: I0228 09:23:17.480612 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b9a34f8-7f3c-41ff-896c-328fe217c902","Type":"ContainerStarted","Data":"a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66"} Feb 28 09:23:17 crc kubenswrapper[4996]: I0228 09:23:17.480627 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b9a34f8-7f3c-41ff-896c-328fe217c902","Type":"ContainerStarted","Data":"1a569e66dbd966973741742883c63420d5039a16edbde4943d46fe176be75d2d"} Feb 28 09:23:17 crc kubenswrapper[4996]: I0228 09:23:17.482389 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"28a8ab76-f177-47a0-8b6c-9f8c75739b30","Type":"ContainerStarted","Data":"ce520364a2808e9ce4c295b3ccfb909672c39d5dbc0f478152d1064e2cd8e277"} Feb 28 09:23:17 crc kubenswrapper[4996]: I0228 09:23:17.482425 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"28a8ab76-f177-47a0-8b6c-9f8c75739b30","Type":"ContainerStarted","Data":"5d0dcf682209e2b763b61bf4b430f38c98ef74b5981c3df702bf656a13297b8a"} Feb 28 09:23:17 crc kubenswrapper[4996]: I0228 09:23:17.520617 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.520597231 podStartE2EDuration="2.520597231s" podCreationTimestamp="2026-02-28 09:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:23:17.503484852 +0000 UTC m=+1361.194287673" watchObservedRunningTime="2026-02-28 09:23:17.520597231 +0000 UTC m=+1361.211400042" Feb 28 09:23:17 crc 
kubenswrapper[4996]: I0228 09:23:17.539942 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.539926593 podStartE2EDuration="2.539926593s" podCreationTimestamp="2026-02-28 09:23:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:23:17.529523508 +0000 UTC m=+1361.220326339" watchObservedRunningTime="2026-02-28 09:23:17.539926593 +0000 UTC m=+1361.230729404" Feb 28 09:23:18 crc kubenswrapper[4996]: I0228 09:23:18.660918 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 09:23:18 crc kubenswrapper[4996]: I0228 09:23:18.661575 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 09:23:18 crc kubenswrapper[4996]: I0228 09:23:18.664509 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 09:23:18 crc kubenswrapper[4996]: I0228 09:23:18.664825 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.499591 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.503546 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.707895 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ghcc5"] Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.712606 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.727566 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ghcc5"] Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.867473 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.867533 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-config\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.867598 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-dns-svc\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.867756 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s295b\" (UniqueName: \"kubernetes.io/projected/25f68b33-2062-44d2-bf66-0220e8e98a58-kube-api-access-s295b\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.868114 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.969930 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s295b\" (UniqueName: \"kubernetes.io/projected/25f68b33-2062-44d2-bf66-0220e8e98a58-kube-api-access-s295b\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.970288 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.970403 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.970489 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-config\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.970586 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-dns-svc\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.971307 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.971655 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-config\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.971790 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-dns-svc\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:19 crc kubenswrapper[4996]: I0228 09:23:19.972365 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:20 crc kubenswrapper[4996]: I0228 09:23:20.003661 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s295b\" (UniqueName: \"kubernetes.io/projected/25f68b33-2062-44d2-bf66-0220e8e98a58-kube-api-access-s295b\") pod \"dnsmasq-dns-5b856c5697-ghcc5\" 
(UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:20 crc kubenswrapper[4996]: I0228 09:23:20.094103 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:20 crc kubenswrapper[4996]: I0228 09:23:20.600263 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ghcc5"] Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.233671 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.249493 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.249556 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.514967 4996 generic.go:334] "Generic (PLEG): container finished" podID="25f68b33-2062-44d2-bf66-0220e8e98a58" containerID="87df899efa90f6097084935a1a353980cfcee8de85bb647c1e68f59e5224af31" exitCode=0 Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.515053 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" event={"ID":"25f68b33-2062-44d2-bf66-0220e8e98a58","Type":"ContainerDied","Data":"87df899efa90f6097084935a1a353980cfcee8de85bb647c1e68f59e5224af31"} Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.515119 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" event={"ID":"25f68b33-2062-44d2-bf66-0220e8e98a58","Type":"ContainerStarted","Data":"295d15878ba32e19b329d4484a4db0bc42c73b54eba2d1a3df923751b4fe86c7"} Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.771636 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:21 crc 
kubenswrapper[4996]: I0228 09:23:21.927437 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.927757 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="ceilometer-central-agent" containerID="cri-o://251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3" gracePeriod=30 Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.927893 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="proxy-httpd" containerID="cri-o://15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4" gracePeriod=30 Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.927943 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="sg-core" containerID="cri-o://2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b" gracePeriod=30 Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.927974 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="ceilometer-notification-agent" containerID="cri-o://450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b" gracePeriod=30 Feb 28 09:23:21 crc kubenswrapper[4996]: I0228 09:23:21.941453 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.194:3000/\": read tcp 10.217.0.2:59438->10.217.0.194:3000: read: connection reset by peer" Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.532312 4996 generic.go:334] "Generic (PLEG): 
container finished" podID="b207b94c-d451-4978-937d-f7ac429d83ed" containerID="15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4" exitCode=0 Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.532344 4996 generic.go:334] "Generic (PLEG): container finished" podID="b207b94c-d451-4978-937d-f7ac429d83ed" containerID="2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b" exitCode=2 Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.532368 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerDied","Data":"15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4"} Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.532410 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerDied","Data":"2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b"} Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.541835 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-log" containerID="cri-o://6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938" gracePeriod=30 Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.542340 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-api" containerID="cri-o://378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c" gracePeriod=30 Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.542731 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" 
event={"ID":"25f68b33-2062-44d2-bf66-0220e8e98a58","Type":"ContainerStarted","Data":"861bf2f5723fcc3cabceb1af29dfdf2c89b37e0b6070d6b3e8ab87702cd7f064"} Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.543218 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.572778 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" podStartSLOduration=3.5727364379999997 podStartE2EDuration="3.572736438s" podCreationTimestamp="2026-02-28 09:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:23:22.571165379 +0000 UTC m=+1366.261968190" watchObservedRunningTime="2026-02-28 09:23:22.572736438 +0000 UTC m=+1366.263539239" Feb 28 09:23:22 crc kubenswrapper[4996]: I0228 09:23:22.974811 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.126900 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-scripts\") pod \"b207b94c-d451-4978-937d-f7ac429d83ed\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.127070 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-combined-ca-bundle\") pod \"b207b94c-d451-4978-937d-f7ac429d83ed\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.127108 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kh7r\" (UniqueName: \"kubernetes.io/projected/b207b94c-d451-4978-937d-f7ac429d83ed-kube-api-access-5kh7r\") pod \"b207b94c-d451-4978-937d-f7ac429d83ed\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.127175 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-sg-core-conf-yaml\") pod \"b207b94c-d451-4978-937d-f7ac429d83ed\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.127303 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-log-httpd\") pod \"b207b94c-d451-4978-937d-f7ac429d83ed\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.127332 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-ceilometer-tls-certs\") pod \"b207b94c-d451-4978-937d-f7ac429d83ed\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.127370 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-run-httpd\") pod \"b207b94c-d451-4978-937d-f7ac429d83ed\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.127391 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-config-data\") pod \"b207b94c-d451-4978-937d-f7ac429d83ed\" (UID: \"b207b94c-d451-4978-937d-f7ac429d83ed\") " Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.127754 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b207b94c-d451-4978-937d-f7ac429d83ed" (UID: "b207b94c-d451-4978-937d-f7ac429d83ed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.127906 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b207b94c-d451-4978-937d-f7ac429d83ed" (UID: "b207b94c-d451-4978-937d-f7ac429d83ed"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.128268 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.128332 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b207b94c-d451-4978-937d-f7ac429d83ed-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.135693 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b207b94c-d451-4978-937d-f7ac429d83ed-kube-api-access-5kh7r" (OuterVolumeSpecName: "kube-api-access-5kh7r") pod "b207b94c-d451-4978-937d-f7ac429d83ed" (UID: "b207b94c-d451-4978-937d-f7ac429d83ed"). InnerVolumeSpecName "kube-api-access-5kh7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.135751 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-scripts" (OuterVolumeSpecName: "scripts") pod "b207b94c-d451-4978-937d-f7ac429d83ed" (UID: "b207b94c-d451-4978-937d-f7ac429d83ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.162000 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b207b94c-d451-4978-937d-f7ac429d83ed" (UID: "b207b94c-d451-4978-937d-f7ac429d83ed"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.187449 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b207b94c-d451-4978-937d-f7ac429d83ed" (UID: "b207b94c-d451-4978-937d-f7ac429d83ed"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.230067 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b207b94c-d451-4978-937d-f7ac429d83ed" (UID: "b207b94c-d451-4978-937d-f7ac429d83ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.231242 4996 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.231804 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.231891 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.231958 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kh7r\" (UniqueName: 
\"kubernetes.io/projected/b207b94c-d451-4978-937d-f7ac429d83ed-kube-api-access-5kh7r\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.232052 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.262758 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-config-data" (OuterVolumeSpecName: "config-data") pod "b207b94c-d451-4978-937d-f7ac429d83ed" (UID: "b207b94c-d451-4978-937d-f7ac429d83ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.333368 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b207b94c-d451-4978-937d-f7ac429d83ed-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.552134 4996 generic.go:334] "Generic (PLEG): container finished" podID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerID="6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938" exitCode=143 Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.552253 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1deeab9-d70d-4591-a15a-1367cae92f3d","Type":"ContainerDied","Data":"6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938"} Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.555807 4996 generic.go:334] "Generic (PLEG): container finished" podID="b207b94c-d451-4978-937d-f7ac429d83ed" containerID="450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b" exitCode=0 Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.555846 4996 generic.go:334] "Generic (PLEG): container 
finished" podID="b207b94c-d451-4978-937d-f7ac429d83ed" containerID="251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3" exitCode=0 Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.555935 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerDied","Data":"450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b"} Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.555983 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerDied","Data":"251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3"} Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.556002 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b207b94c-d451-4978-937d-f7ac429d83ed","Type":"ContainerDied","Data":"ebf52e877de622821dafaea5087fa646dbb8a6556fd2730415e90c44a2d6f145"} Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.556037 4996 scope.go:117] "RemoveContainer" containerID="15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.556303 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.583952 4996 scope.go:117] "RemoveContainer" containerID="2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.590316 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.598066 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.613484 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:23 crc kubenswrapper[4996]: E0228 09:23:23.613809 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="ceilometer-central-agent" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.613825 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="ceilometer-central-agent" Feb 28 09:23:23 crc kubenswrapper[4996]: E0228 09:23:23.613843 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="sg-core" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.613850 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="sg-core" Feb 28 09:23:23 crc kubenswrapper[4996]: E0228 09:23:23.613858 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="proxy-httpd" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.613876 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="proxy-httpd" Feb 28 09:23:23 crc kubenswrapper[4996]: E0228 09:23:23.613889 4996 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="ceilometer-notification-agent" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.613896 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="ceilometer-notification-agent" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.614075 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="ceilometer-central-agent" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.614089 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="proxy-httpd" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.614102 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="ceilometer-notification-agent" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.614111 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" containerName="sg-core" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.615585 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.617535 4996 scope.go:117] "RemoveContainer" containerID="450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.617891 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.618302 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.624467 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.643785 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.647057 4996 scope.go:117] "RemoveContainer" containerID="251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.667500 4996 scope.go:117] "RemoveContainer" containerID="15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4" Feb 28 09:23:23 crc kubenswrapper[4996]: E0228 09:23:23.668069 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4\": container with ID starting with 15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4 not found: ID does not exist" containerID="15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.668098 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4"} err="failed to get container status 
\"15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4\": rpc error: code = NotFound desc = could not find container \"15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4\": container with ID starting with 15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4 not found: ID does not exist" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.668119 4996 scope.go:117] "RemoveContainer" containerID="2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b" Feb 28 09:23:23 crc kubenswrapper[4996]: E0228 09:23:23.668717 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b\": container with ID starting with 2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b not found: ID does not exist" containerID="2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.668737 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b"} err="failed to get container status \"2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b\": rpc error: code = NotFound desc = could not find container \"2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b\": container with ID starting with 2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b not found: ID does not exist" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.668749 4996 scope.go:117] "RemoveContainer" containerID="450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b" Feb 28 09:23:23 crc kubenswrapper[4996]: E0228 09:23:23.669231 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b\": container with ID starting with 450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b not found: ID does not exist" containerID="450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.669250 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b"} err="failed to get container status \"450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b\": rpc error: code = NotFound desc = could not find container \"450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b\": container with ID starting with 450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b not found: ID does not exist" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.669261 4996 scope.go:117] "RemoveContainer" containerID="251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3" Feb 28 09:23:23 crc kubenswrapper[4996]: E0228 09:23:23.669453 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3\": container with ID starting with 251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3 not found: ID does not exist" containerID="251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.669470 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3"} err="failed to get container status \"251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3\": rpc error: code = NotFound desc = could not find container \"251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3\": container with ID 
starting with 251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3 not found: ID does not exist" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.669481 4996 scope.go:117] "RemoveContainer" containerID="15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.669738 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4"} err="failed to get container status \"15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4\": rpc error: code = NotFound desc = could not find container \"15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4\": container with ID starting with 15221f1a7fca862f0229b7c3be65eaf39f23d2ef5f2f2c6f021a3fdf5af1a4f4 not found: ID does not exist" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.669756 4996 scope.go:117] "RemoveContainer" containerID="2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.670098 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b"} err="failed to get container status \"2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b\": rpc error: code = NotFound desc = could not find container \"2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b\": container with ID starting with 2282a6c831e43140ddaea709260ef63adf6192e52df7546e8238ebe2025e720b not found: ID does not exist" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.670117 4996 scope.go:117] "RemoveContainer" containerID="450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.670385 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b"} err="failed to get container status \"450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b\": rpc error: code = NotFound desc = could not find container \"450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b\": container with ID starting with 450b9f4ca291e7b2d712458d8fa5a2716f016f3b5b2be1a28f04582edf2fcd6b not found: ID does not exist" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.670403 4996 scope.go:117] "RemoveContainer" containerID="251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.670606 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3"} err="failed to get container status \"251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3\": rpc error: code = NotFound desc = could not find container \"251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3\": container with ID starting with 251aa293e7b8488e5b0ebf0b52e63849e076be20955704fa7523a9e1a4578bb3 not found: ID does not exist" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.740930 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.741070 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-run-httpd\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 
crc kubenswrapper[4996]: I0228 09:23:23.741133 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.741280 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-log-httpd\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.741494 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-config-data\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.741809 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd2wd\" (UniqueName: \"kubernetes.io/projected/a342b431-f1db-49c8-844f-f56cfc0914e1-kube-api-access-gd2wd\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.741942 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.742063 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-scripts\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.844331 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd2wd\" (UniqueName: \"kubernetes.io/projected/a342b431-f1db-49c8-844f-f56cfc0914e1-kube-api-access-gd2wd\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.844777 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.845383 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-scripts\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.845452 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.845520 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-run-httpd\") pod \"ceilometer-0\" (UID: 
\"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.845576 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.845665 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-log-httpd\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.845745 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-config-data\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.846288 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-log-httpd\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.846619 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-run-httpd\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.848934 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.849632 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-scripts\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.851712 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.851900 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-config-data\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.853493 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.869335 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd2wd\" (UniqueName: \"kubernetes.io/projected/a342b431-f1db-49c8-844f-f56cfc0914e1-kube-api-access-gd2wd\") pod \"ceilometer-0\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " pod="openstack/ceilometer-0" Feb 28 09:23:23 crc kubenswrapper[4996]: I0228 09:23:23.946356 4996 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:24 crc kubenswrapper[4996]: I0228 09:23:24.253744 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:24 crc kubenswrapper[4996]: I0228 09:23:24.414471 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:24 crc kubenswrapper[4996]: I0228 09:23:24.566997 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerStarted","Data":"f446349934c77ed13a0b9a83117153eec486df17e8d2e778618e160ebc6bad6e"} Feb 28 09:23:25 crc kubenswrapper[4996]: I0228 09:23:25.047762 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b207b94c-d451-4978-937d-f7ac429d83ed" path="/var/lib/kubelet/pods/b207b94c-d451-4978-937d-f7ac429d83ed/volumes" Feb 28 09:23:25 crc kubenswrapper[4996]: I0228 09:23:25.579133 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerStarted","Data":"b42de08d3c7b691214910732dfac414de45242b41c43723f5ea011b3fd82aabf"} Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.233597 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.246637 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.249923 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.249960 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.256111 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.391794 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-combined-ca-bundle\") pod \"c1deeab9-d70d-4591-a15a-1367cae92f3d\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.392309 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1deeab9-d70d-4591-a15a-1367cae92f3d-logs\") pod \"c1deeab9-d70d-4591-a15a-1367cae92f3d\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.392366 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmw2b\" (UniqueName: \"kubernetes.io/projected/c1deeab9-d70d-4591-a15a-1367cae92f3d-kube-api-access-qmw2b\") pod \"c1deeab9-d70d-4591-a15a-1367cae92f3d\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.392424 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-config-data\") pod \"c1deeab9-d70d-4591-a15a-1367cae92f3d\" (UID: \"c1deeab9-d70d-4591-a15a-1367cae92f3d\") " Feb 28 09:23:26 crc 
kubenswrapper[4996]: I0228 09:23:26.428791 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1deeab9-d70d-4591-a15a-1367cae92f3d-logs" (OuterVolumeSpecName: "logs") pod "c1deeab9-d70d-4591-a15a-1367cae92f3d" (UID: "c1deeab9-d70d-4591-a15a-1367cae92f3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.431280 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1deeab9-d70d-4591-a15a-1367cae92f3d-kube-api-access-qmw2b" (OuterVolumeSpecName: "kube-api-access-qmw2b") pod "c1deeab9-d70d-4591-a15a-1367cae92f3d" (UID: "c1deeab9-d70d-4591-a15a-1367cae92f3d"). InnerVolumeSpecName "kube-api-access-qmw2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.440460 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-config-data" (OuterVolumeSpecName: "config-data") pod "c1deeab9-d70d-4591-a15a-1367cae92f3d" (UID: "c1deeab9-d70d-4591-a15a-1367cae92f3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.449160 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1deeab9-d70d-4591-a15a-1367cae92f3d" (UID: "c1deeab9-d70d-4591-a15a-1367cae92f3d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.494025 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.494065 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c1deeab9-d70d-4591-a15a-1367cae92f3d-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.494079 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmw2b\" (UniqueName: \"kubernetes.io/projected/c1deeab9-d70d-4591-a15a-1367cae92f3d-kube-api-access-qmw2b\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.494094 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1deeab9-d70d-4591-a15a-1367cae92f3d-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.596136 4996 generic.go:334] "Generic (PLEG): container finished" podID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerID="378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c" exitCode=0 Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.596212 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1deeab9-d70d-4591-a15a-1367cae92f3d","Type":"ContainerDied","Data":"378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c"} Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.596244 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c1deeab9-d70d-4591-a15a-1367cae92f3d","Type":"ContainerDied","Data":"b5be7e993ce53f8d1699bcac98511bc1ffc1ec63d5563388569aac5f6381d8fa"} Feb 28 09:23:26 crc kubenswrapper[4996]: 
I0228 09:23:26.596264 4996 scope.go:117] "RemoveContainer" containerID="378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.596464 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.623502 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerStarted","Data":"119a899d497bc6996943570b4bdc3b4fe1fb9c16b047258ef8a670e7ee859c07"} Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.645212 4996 scope.go:117] "RemoveContainer" containerID="6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.690090 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.708807 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.719530 4996 scope.go:117] "RemoveContainer" containerID="378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c" Feb 28 09:23:26 crc kubenswrapper[4996]: E0228 09:23:26.719912 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c\": container with ID starting with 378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c not found: ID does not exist" containerID="378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.719956 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c"} err="failed to get container status 
\"378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c\": rpc error: code = NotFound desc = could not find container \"378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c\": container with ID starting with 378dbcf384f8ef9d89c6c1e756cbc33e6a9460172c32b0ed2e6358f4bb743a5c not found: ID does not exist" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.719981 4996 scope.go:117] "RemoveContainer" containerID="6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938" Feb 28 09:23:26 crc kubenswrapper[4996]: E0228 09:23:26.720315 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938\": container with ID starting with 6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938 not found: ID does not exist" containerID="6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.720351 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938"} err="failed to get container status \"6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938\": rpc error: code = NotFound desc = could not find container \"6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938\": container with ID starting with 6a1c10bd05cd51ee9940503ab5732f2769ccc386cd6cc473afa9354347672938 not found: ID does not exist" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.730198 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:26 crc kubenswrapper[4996]: E0228 09:23:26.730558 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-log" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.730589 4996 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-log" Feb 28 09:23:26 crc kubenswrapper[4996]: E0228 09:23:26.730622 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-api" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.730628 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-api" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.730791 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-api" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.730811 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" containerName="nova-api-log" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.731664 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.733951 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.737094 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.737263 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.737362 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.755387 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.799523 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmqzs\" (UniqueName: \"kubernetes.io/projected/64529d7f-92ea-4fcd-8293-165f79009b53-kube-api-access-nmqzs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.799674 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-internal-tls-certs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.799764 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " 
pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.799846 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-config-data\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.799871 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64529d7f-92ea-4fcd-8293-165f79009b53-logs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.799946 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-public-tls-certs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.901195 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.901256 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-config-data\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.901281 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/64529d7f-92ea-4fcd-8293-165f79009b53-logs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.901312 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-public-tls-certs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.901363 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmqzs\" (UniqueName: \"kubernetes.io/projected/64529d7f-92ea-4fcd-8293-165f79009b53-kube-api-access-nmqzs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.901401 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-internal-tls-certs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.901950 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64529d7f-92ea-4fcd-8293-165f79009b53-logs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.903890 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xwlrs"] Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.904500 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.905233 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.905273 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-config-data\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.905444 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-internal-tls-certs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.907374 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.907410 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.921546 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-public-tls-certs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.925641 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xwlrs"] Feb 28 09:23:26 crc kubenswrapper[4996]: I0228 09:23:26.933798 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmqzs\" (UniqueName: 
\"kubernetes.io/projected/64529d7f-92ea-4fcd-8293-165f79009b53-kube-api-access-nmqzs\") pod \"nova-api-0\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " pod="openstack/nova-api-0" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.003071 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-scripts\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.003156 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.003222 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smrmr\" (UniqueName: \"kubernetes.io/projected/2aca7df8-ff8f-457d-b65b-fba7e0eed249-kube-api-access-smrmr\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.003249 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-config-data\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.043489 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1deeab9-d70d-4591-a15a-1367cae92f3d" 
path="/var/lib/kubelet/pods/c1deeab9-d70d-4591-a15a-1367cae92f3d/volumes" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.068631 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.104445 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smrmr\" (UniqueName: \"kubernetes.io/projected/2aca7df8-ff8f-457d-b65b-fba7e0eed249-kube-api-access-smrmr\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.104510 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-config-data\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.104645 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-scripts\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.104716 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.114764 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-config-data\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.115183 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.116212 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-scripts\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.123702 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smrmr\" (UniqueName: \"kubernetes.io/projected/2aca7df8-ff8f-457d-b65b-fba7e0eed249-kube-api-access-smrmr\") pod \"nova-cell1-cell-mapping-xwlrs\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.264241 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.264995 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-log" probeResult="failure" 
output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.316270 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.589292 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.661135 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64529d7f-92ea-4fcd-8293-165f79009b53","Type":"ContainerStarted","Data":"009dc52316b5e9b0bc196fb22d3be49ca4ec102c7b6fc9c51417cba56ed08602"} Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.675854 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerStarted","Data":"5842d63cebc91cbb27f741512f53000627a43d51cfd33583e8e5adf3ef05e480"} Feb 28 09:23:27 crc kubenswrapper[4996]: W0228 09:23:27.828386 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2aca7df8_ff8f_457d_b65b_fba7e0eed249.slice/crio-15f0d363dddd281c686b2db48f4bf244cd3f2d76ea1154e5f3891d3a905489b2 WatchSource:0}: Error finding container 15f0d363dddd281c686b2db48f4bf244cd3f2d76ea1154e5f3891d3a905489b2: Status 404 returned error can't find the container with id 15f0d363dddd281c686b2db48f4bf244cd3f2d76ea1154e5f3891d3a905489b2 Feb 28 09:23:27 crc kubenswrapper[4996]: I0228 09:23:27.831097 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xwlrs"] Feb 28 09:23:28 crc kubenswrapper[4996]: I0228 09:23:28.696673 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"64529d7f-92ea-4fcd-8293-165f79009b53","Type":"ContainerStarted","Data":"d85e234bc0f7fe77d2f9e422922a93c93bff3aed8a26db896c99d433f72dc283"} Feb 28 09:23:28 crc kubenswrapper[4996]: I0228 09:23:28.697264 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64529d7f-92ea-4fcd-8293-165f79009b53","Type":"ContainerStarted","Data":"38665342b2e01698d06896cb009a3d6b7f829132b519683dde926d33493bd6e1"} Feb 28 09:23:28 crc kubenswrapper[4996]: I0228 09:23:28.703413 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xwlrs" event={"ID":"2aca7df8-ff8f-457d-b65b-fba7e0eed249","Type":"ContainerStarted","Data":"fd159c64ce4b593bdda85f84be1e153076f047d334a404cb0343cdf5a6a1446c"} Feb 28 09:23:28 crc kubenswrapper[4996]: I0228 09:23:28.703452 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xwlrs" event={"ID":"2aca7df8-ff8f-457d-b65b-fba7e0eed249","Type":"ContainerStarted","Data":"15f0d363dddd281c686b2db48f4bf244cd3f2d76ea1154e5f3891d3a905489b2"} Feb 28 09:23:28 crc kubenswrapper[4996]: I0228 09:23:28.716577 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.716562454 podStartE2EDuration="2.716562454s" podCreationTimestamp="2026-02-28 09:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:23:28.713928101 +0000 UTC m=+1372.404730912" watchObservedRunningTime="2026-02-28 09:23:28.716562454 +0000 UTC m=+1372.407365265" Feb 28 09:23:28 crc kubenswrapper[4996]: I0228 09:23:28.750935 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xwlrs" podStartSLOduration=2.750918635 podStartE2EDuration="2.750918635s" podCreationTimestamp="2026-02-28 09:23:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:23:28.746296182 +0000 UTC m=+1372.437098993" watchObservedRunningTime="2026-02-28 09:23:28.750918635 +0000 UTC m=+1372.441721446" Feb 28 09:23:30 crc kubenswrapper[4996]: I0228 09:23:30.095186 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:23:30 crc kubenswrapper[4996]: I0228 09:23:30.217412 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-pm8t6"] Feb 28 09:23:30 crc kubenswrapper[4996]: I0228 09:23:30.217744 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" podUID="b391b287-fc4a-437e-b157-db5e86661249" containerName="dnsmasq-dns" containerID="cri-o://646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea" gracePeriod=10 Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.325001 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.396166 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-config\") pod \"b391b287-fc4a-437e-b157-db5e86661249\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.396257 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-sb\") pod \"b391b287-fc4a-437e-b157-db5e86661249\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.396381 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-dns-svc\") pod \"b391b287-fc4a-437e-b157-db5e86661249\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.396487 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-nb\") pod \"b391b287-fc4a-437e-b157-db5e86661249\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.396561 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9t6v\" (UniqueName: \"kubernetes.io/projected/b391b287-fc4a-437e-b157-db5e86661249-kube-api-access-l9t6v\") pod \"b391b287-fc4a-437e-b157-db5e86661249\" (UID: \"b391b287-fc4a-437e-b157-db5e86661249\") " Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.405298 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b391b287-fc4a-437e-b157-db5e86661249-kube-api-access-l9t6v" (OuterVolumeSpecName: "kube-api-access-l9t6v") pod "b391b287-fc4a-437e-b157-db5e86661249" (UID: "b391b287-fc4a-437e-b157-db5e86661249"). InnerVolumeSpecName "kube-api-access-l9t6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.449797 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-config" (OuterVolumeSpecName: "config") pod "b391b287-fc4a-437e-b157-db5e86661249" (UID: "b391b287-fc4a-437e-b157-db5e86661249"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.453244 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b391b287-fc4a-437e-b157-db5e86661249" (UID: "b391b287-fc4a-437e-b157-db5e86661249"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.453867 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b391b287-fc4a-437e-b157-db5e86661249" (UID: "b391b287-fc4a-437e-b157-db5e86661249"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.455041 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b391b287-fc4a-437e-b157-db5e86661249" (UID: "b391b287-fc4a-437e-b157-db5e86661249"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.499041 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.499072 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.499084 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9t6v\" (UniqueName: \"kubernetes.io/projected/b391b287-fc4a-437e-b157-db5e86661249-kube-api-access-l9t6v\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.499092 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.499103 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b391b287-fc4a-437e-b157-db5e86661249-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.730211 4996 generic.go:334] "Generic (PLEG): container finished" podID="b391b287-fc4a-437e-b157-db5e86661249" containerID="646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea" exitCode=0 Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.730258 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" event={"ID":"b391b287-fc4a-437e-b157-db5e86661249","Type":"ContainerDied","Data":"646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea"} Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 
09:23:31.730340 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" event={"ID":"b391b287-fc4a-437e-b157-db5e86661249","Type":"ContainerDied","Data":"d8c974342105b02788494b71bd04c62717fbc0d54f1da8fae56707f1240c5c02"} Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.730362 4996 scope.go:117] "RemoveContainer" containerID="646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.730586 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-pm8t6" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.733077 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerStarted","Data":"c57b7db375eb4e78c95ada2f683b93b85f897a8681f18fed65a3b754d6dfd6fb"} Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.733252 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="ceilometer-central-agent" containerID="cri-o://b42de08d3c7b691214910732dfac414de45242b41c43723f5ea011b3fd82aabf" gracePeriod=30 Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.733473 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.733516 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="proxy-httpd" containerID="cri-o://c57b7db375eb4e78c95ada2f683b93b85f897a8681f18fed65a3b754d6dfd6fb" gracePeriod=30 Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.733559 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="sg-core" containerID="cri-o://5842d63cebc91cbb27f741512f53000627a43d51cfd33583e8e5adf3ef05e480" gracePeriod=30 Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.733593 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="ceilometer-notification-agent" containerID="cri-o://119a899d497bc6996943570b4bdc3b4fe1fb9c16b047258ef8a670e7ee859c07" gracePeriod=30 Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.760488 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.212144584 podStartE2EDuration="8.760466687s" podCreationTimestamp="2026-02-28 09:23:23 +0000 UTC" firstStartedPulling="2026-02-28 09:23:24.42510198 +0000 UTC m=+1368.115904791" lastFinishedPulling="2026-02-28 09:23:29.973424083 +0000 UTC m=+1373.664226894" observedRunningTime="2026-02-28 09:23:31.756677335 +0000 UTC m=+1375.447480156" watchObservedRunningTime="2026-02-28 09:23:31.760466687 +0000 UTC m=+1375.451269498" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.775571 4996 scope.go:117] "RemoveContainer" containerID="4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.785053 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-pm8t6"] Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.803674 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-pm8t6"] Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.806453 4996 scope.go:117] "RemoveContainer" containerID="646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea" Feb 28 09:23:31 crc kubenswrapper[4996]: E0228 09:23:31.808797 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea\": container with ID starting with 646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea not found: ID does not exist" containerID="646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.808837 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea"} err="failed to get container status \"646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea\": rpc error: code = NotFound desc = could not find container \"646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea\": container with ID starting with 646f1622f5218c1b874652e19aa0afad8089ea2e69942fee6d8d394baa87b6ea not found: ID does not exist" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.808864 4996 scope.go:117] "RemoveContainer" containerID="4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e" Feb 28 09:23:31 crc kubenswrapper[4996]: E0228 09:23:31.809372 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e\": container with ID starting with 4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e not found: ID does not exist" containerID="4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e" Feb 28 09:23:31 crc kubenswrapper[4996]: I0228 09:23:31.809395 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e"} err="failed to get container status \"4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e\": rpc error: code = NotFound desc = could not find container \"4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e\": 
container with ID starting with 4bd1a11adcf6f7e65b93c8c01e60d6abb1d4eb0993593130a3505bd88df6686e not found: ID does not exist" Feb 28 09:23:32 crc kubenswrapper[4996]: I0228 09:23:32.750675 4996 generic.go:334] "Generic (PLEG): container finished" podID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerID="c57b7db375eb4e78c95ada2f683b93b85f897a8681f18fed65a3b754d6dfd6fb" exitCode=0 Feb 28 09:23:32 crc kubenswrapper[4996]: I0228 09:23:32.750993 4996 generic.go:334] "Generic (PLEG): container finished" podID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerID="5842d63cebc91cbb27f741512f53000627a43d51cfd33583e8e5adf3ef05e480" exitCode=2 Feb 28 09:23:32 crc kubenswrapper[4996]: I0228 09:23:32.751015 4996 generic.go:334] "Generic (PLEG): container finished" podID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerID="119a899d497bc6996943570b4bdc3b4fe1fb9c16b047258ef8a670e7ee859c07" exitCode=0 Feb 28 09:23:32 crc kubenswrapper[4996]: I0228 09:23:32.751024 4996 generic.go:334] "Generic (PLEG): container finished" podID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerID="b42de08d3c7b691214910732dfac414de45242b41c43723f5ea011b3fd82aabf" exitCode=0 Feb 28 09:23:32 crc kubenswrapper[4996]: I0228 09:23:32.750761 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerDied","Data":"c57b7db375eb4e78c95ada2f683b93b85f897a8681f18fed65a3b754d6dfd6fb"} Feb 28 09:23:32 crc kubenswrapper[4996]: I0228 09:23:32.751103 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerDied","Data":"5842d63cebc91cbb27f741512f53000627a43d51cfd33583e8e5adf3ef05e480"} Feb 28 09:23:32 crc kubenswrapper[4996]: I0228 09:23:32.751119 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerDied","Data":"119a899d497bc6996943570b4bdc3b4fe1fb9c16b047258ef8a670e7ee859c07"} Feb 28 09:23:32 crc kubenswrapper[4996]: I0228 09:23:32.751132 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerDied","Data":"b42de08d3c7b691214910732dfac414de45242b41c43723f5ea011b3fd82aabf"} Feb 28 09:23:32 crc kubenswrapper[4996]: I0228 09:23:32.960231 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.033918 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-combined-ca-bundle\") pod \"a342b431-f1db-49c8-844f-f56cfc0914e1\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.033970 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-run-httpd\") pod \"a342b431-f1db-49c8-844f-f56cfc0914e1\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.034043 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-sg-core-conf-yaml\") pod \"a342b431-f1db-49c8-844f-f56cfc0914e1\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.034060 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-scripts\") pod \"a342b431-f1db-49c8-844f-f56cfc0914e1\" (UID: 
\"a342b431-f1db-49c8-844f-f56cfc0914e1\") " Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.034080 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-log-httpd\") pod \"a342b431-f1db-49c8-844f-f56cfc0914e1\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.034110 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-config-data\") pod \"a342b431-f1db-49c8-844f-f56cfc0914e1\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.034223 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd2wd\" (UniqueName: \"kubernetes.io/projected/a342b431-f1db-49c8-844f-f56cfc0914e1-kube-api-access-gd2wd\") pod \"a342b431-f1db-49c8-844f-f56cfc0914e1\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.034272 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-ceilometer-tls-certs\") pod \"a342b431-f1db-49c8-844f-f56cfc0914e1\" (UID: \"a342b431-f1db-49c8-844f-f56cfc0914e1\") " Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.034863 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a342b431-f1db-49c8-844f-f56cfc0914e1" (UID: "a342b431-f1db-49c8-844f-f56cfc0914e1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.036206 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a342b431-f1db-49c8-844f-f56cfc0914e1" (UID: "a342b431-f1db-49c8-844f-f56cfc0914e1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.041381 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-scripts" (OuterVolumeSpecName: "scripts") pod "a342b431-f1db-49c8-844f-f56cfc0914e1" (UID: "a342b431-f1db-49c8-844f-f56cfc0914e1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.041643 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a342b431-f1db-49c8-844f-f56cfc0914e1-kube-api-access-gd2wd" (OuterVolumeSpecName: "kube-api-access-gd2wd") pod "a342b431-f1db-49c8-844f-f56cfc0914e1" (UID: "a342b431-f1db-49c8-844f-f56cfc0914e1"). InnerVolumeSpecName "kube-api-access-gd2wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.049846 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b391b287-fc4a-437e-b157-db5e86661249" path="/var/lib/kubelet/pods/b391b287-fc4a-437e-b157-db5e86661249/volumes" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.066639 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a342b431-f1db-49c8-844f-f56cfc0914e1" (UID: "a342b431-f1db-49c8-844f-f56cfc0914e1"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.098417 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a342b431-f1db-49c8-844f-f56cfc0914e1" (UID: "a342b431-f1db-49c8-844f-f56cfc0914e1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.107313 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a342b431-f1db-49c8-844f-f56cfc0914e1" (UID: "a342b431-f1db-49c8-844f-f56cfc0914e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.136648 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd2wd\" (UniqueName: \"kubernetes.io/projected/a342b431-f1db-49c8-844f-f56cfc0914e1-kube-api-access-gd2wd\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.136929 4996 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.137046 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.137141 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.137225 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.137305 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.137392 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a342b431-f1db-49c8-844f-f56cfc0914e1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.147851 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-config-data" (OuterVolumeSpecName: "config-data") pod "a342b431-f1db-49c8-844f-f56cfc0914e1" (UID: "a342b431-f1db-49c8-844f-f56cfc0914e1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.239703 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a342b431-f1db-49c8-844f-f56cfc0914e1-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.766190 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a342b431-f1db-49c8-844f-f56cfc0914e1","Type":"ContainerDied","Data":"f446349934c77ed13a0b9a83117153eec486df17e8d2e778618e160ebc6bad6e"} Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.766535 4996 scope.go:117] "RemoveContainer" containerID="c57b7db375eb4e78c95ada2f683b93b85f897a8681f18fed65a3b754d6dfd6fb" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.766286 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.786828 4996 scope.go:117] "RemoveContainer" containerID="5842d63cebc91cbb27f741512f53000627a43d51cfd33583e8e5adf3ef05e480" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.813855 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.818111 4996 scope.go:117] "RemoveContainer" containerID="119a899d497bc6996943570b4bdc3b4fe1fb9c16b047258ef8a670e7ee859c07" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.825481 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.849771 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:33 crc kubenswrapper[4996]: E0228 09:23:33.850447 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="proxy-httpd" Feb 28 09:23:33 crc 
kubenswrapper[4996]: I0228 09:23:33.850480 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="proxy-httpd" Feb 28 09:23:33 crc kubenswrapper[4996]: E0228 09:23:33.850506 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="ceilometer-central-agent" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.850520 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="ceilometer-central-agent" Feb 28 09:23:33 crc kubenswrapper[4996]: E0228 09:23:33.850546 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b391b287-fc4a-437e-b157-db5e86661249" containerName="init" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.850560 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b391b287-fc4a-437e-b157-db5e86661249" containerName="init" Feb 28 09:23:33 crc kubenswrapper[4996]: E0228 09:23:33.850581 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b391b287-fc4a-437e-b157-db5e86661249" containerName="dnsmasq-dns" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.850593 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b391b287-fc4a-437e-b157-db5e86661249" containerName="dnsmasq-dns" Feb 28 09:23:33 crc kubenswrapper[4996]: E0228 09:23:33.850623 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="sg-core" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.850634 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="sg-core" Feb 28 09:23:33 crc kubenswrapper[4996]: E0228 09:23:33.850654 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="ceilometer-notification-agent" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 
09:23:33.850668 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="ceilometer-notification-agent" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.850982 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="ceilometer-central-agent" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.851038 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="proxy-httpd" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.851056 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="sg-core" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.851070 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" containerName="ceilometer-notification-agent" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.851089 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b391b287-fc4a-437e-b157-db5e86661249" containerName="dnsmasq-dns" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.854126 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.861059 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.868122 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.868141 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.868355 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.888186 4996 scope.go:117] "RemoveContainer" containerID="b42de08d3c7b691214910732dfac414de45242b41c43723f5ea011b3fd82aabf" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.950682 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jg6x\" (UniqueName: \"kubernetes.io/projected/044cb29c-1d87-46f7-b2d1-3d82f880eceb-kube-api-access-7jg6x\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.950755 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-log-httpd\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.950827 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-scripts\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" 
Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.950967 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-config-data\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.951036 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.951104 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-run-httpd\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.951166 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:33 crc kubenswrapper[4996]: I0228 09:23:33.951200 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.052997 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.053086 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.053122 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jg6x\" (UniqueName: \"kubernetes.io/projected/044cb29c-1d87-46f7-b2d1-3d82f880eceb-kube-api-access-7jg6x\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.053171 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-log-httpd\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.053638 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-scripts\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.053690 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-config-data\") pod \"ceilometer-0\" (UID: 
\"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.053753 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.054053 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-run-httpd\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.059764 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-run-httpd\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.059998 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-log-httpd\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.063134 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.075399 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.079632 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-config-data\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.081249 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jg6x\" (UniqueName: \"kubernetes.io/projected/044cb29c-1d87-46f7-b2d1-3d82f880eceb-kube-api-access-7jg6x\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.081693 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-scripts\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.084093 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.180862 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.640855 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:23:34 crc kubenswrapper[4996]: W0228 09:23:34.643994 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod044cb29c_1d87_46f7_b2d1_3d82f880eceb.slice/crio-5d2e5b2fc074bd2b7bec750ea45328ef711b4ef8baae511e34ea44905f29553a WatchSource:0}: Error finding container 5d2e5b2fc074bd2b7bec750ea45328ef711b4ef8baae511e34ea44905f29553a: Status 404 returned error can't find the container with id 5d2e5b2fc074bd2b7bec750ea45328ef711b4ef8baae511e34ea44905f29553a Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.778964 4996 generic.go:334] "Generic (PLEG): container finished" podID="2aca7df8-ff8f-457d-b65b-fba7e0eed249" containerID="fd159c64ce4b593bdda85f84be1e153076f047d334a404cb0343cdf5a6a1446c" exitCode=0 Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.779037 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xwlrs" event={"ID":"2aca7df8-ff8f-457d-b65b-fba7e0eed249","Type":"ContainerDied","Data":"fd159c64ce4b593bdda85f84be1e153076f047d334a404cb0343cdf5a6a1446c"} Feb 28 09:23:34 crc kubenswrapper[4996]: I0228 09:23:34.781850 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerStarted","Data":"5d2e5b2fc074bd2b7bec750ea45328ef711b4ef8baae511e34ea44905f29553a"} Feb 28 09:23:35 crc kubenswrapper[4996]: I0228 09:23:35.044345 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a342b431-f1db-49c8-844f-f56cfc0914e1" path="/var/lib/kubelet/pods/a342b431-f1db-49c8-844f-f56cfc0914e1/volumes" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.167798 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.193390 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-scripts\") pod \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.193476 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-combined-ca-bundle\") pod \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.193602 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smrmr\" (UniqueName: \"kubernetes.io/projected/2aca7df8-ff8f-457d-b65b-fba7e0eed249-kube-api-access-smrmr\") pod \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.193623 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-config-data\") pod \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\" (UID: \"2aca7df8-ff8f-457d-b65b-fba7e0eed249\") " Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.200116 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-scripts" (OuterVolumeSpecName: "scripts") pod "2aca7df8-ff8f-457d-b65b-fba7e0eed249" (UID: "2aca7df8-ff8f-457d-b65b-fba7e0eed249"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.207619 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aca7df8-ff8f-457d-b65b-fba7e0eed249-kube-api-access-smrmr" (OuterVolumeSpecName: "kube-api-access-smrmr") pod "2aca7df8-ff8f-457d-b65b-fba7e0eed249" (UID: "2aca7df8-ff8f-457d-b65b-fba7e0eed249"). InnerVolumeSpecName "kube-api-access-smrmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.223599 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-config-data" (OuterVolumeSpecName: "config-data") pod "2aca7df8-ff8f-457d-b65b-fba7e0eed249" (UID: "2aca7df8-ff8f-457d-b65b-fba7e0eed249"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.225187 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2aca7df8-ff8f-457d-b65b-fba7e0eed249" (UID: "2aca7df8-ff8f-457d-b65b-fba7e0eed249"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.254045 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.257787 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.260161 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.295497 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smrmr\" (UniqueName: \"kubernetes.io/projected/2aca7df8-ff8f-457d-b65b-fba7e0eed249-kube-api-access-smrmr\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.295518 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.295528 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.295537 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2aca7df8-ff8f-457d-b65b-fba7e0eed249-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.806974 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xwlrs" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.806967 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xwlrs" event={"ID":"2aca7df8-ff8f-457d-b65b-fba7e0eed249","Type":"ContainerDied","Data":"15f0d363dddd281c686b2db48f4bf244cd3f2d76ea1154e5f3891d3a905489b2"} Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.807606 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15f0d363dddd281c686b2db48f4bf244cd3f2d76ea1154e5f3891d3a905489b2" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.829745 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerStarted","Data":"9936ab31f30a3a2bf50f424c98e0016438478ad28eeb33978444299b763526c8"} Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.835790 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.987163 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.987364 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="64529d7f-92ea-4fcd-8293-165f79009b53" containerName="nova-api-log" containerID="cri-o://38665342b2e01698d06896cb009a3d6b7f829132b519683dde926d33493bd6e1" gracePeriod=30 Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:36.987747 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="64529d7f-92ea-4fcd-8293-165f79009b53" containerName="nova-api-api" containerID="cri-o://d85e234bc0f7fe77d2f9e422922a93c93bff3aed8a26db896c99d433f72dc283" gracePeriod=30 Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:37.056342 4996 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:37.056583 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b49a28a3-c5c5-464f-9438-d1756138dfe1" containerName="nova-scheduler-scheduler" containerID="cri-o://44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c" gracePeriod=30 Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:37.113387 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:37.840914 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerStarted","Data":"a78f257ae7e56d19ced48008d4c57e27d72c4b8f149524e6937626ee0814da34"} Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:37.842678 4996 generic.go:334] "Generic (PLEG): container finished" podID="64529d7f-92ea-4fcd-8293-165f79009b53" containerID="d85e234bc0f7fe77d2f9e422922a93c93bff3aed8a26db896c99d433f72dc283" exitCode=0 Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:37.842708 4996 generic.go:334] "Generic (PLEG): container finished" podID="64529d7f-92ea-4fcd-8293-165f79009b53" containerID="38665342b2e01698d06896cb009a3d6b7f829132b519683dde926d33493bd6e1" exitCode=143 Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:37.842760 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64529d7f-92ea-4fcd-8293-165f79009b53","Type":"ContainerDied","Data":"d85e234bc0f7fe77d2f9e422922a93c93bff3aed8a26db896c99d433f72dc283"} Feb 28 09:23:37 crc kubenswrapper[4996]: I0228 09:23:37.842792 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64529d7f-92ea-4fcd-8293-165f79009b53","Type":"ContainerDied","Data":"38665342b2e01698d06896cb009a3d6b7f829132b519683dde926d33493bd6e1"} Feb 28 09:23:37 crc 
kubenswrapper[4996]: I0228 09:23:37.994507 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.136900 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-public-tls-certs\") pod \"64529d7f-92ea-4fcd-8293-165f79009b53\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.137314 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64529d7f-92ea-4fcd-8293-165f79009b53-logs\") pod \"64529d7f-92ea-4fcd-8293-165f79009b53\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.137347 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-config-data\") pod \"64529d7f-92ea-4fcd-8293-165f79009b53\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.137373 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-combined-ca-bundle\") pod \"64529d7f-92ea-4fcd-8293-165f79009b53\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.137438 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-internal-tls-certs\") pod \"64529d7f-92ea-4fcd-8293-165f79009b53\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.137487 4996 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nmqzs\" (UniqueName: \"kubernetes.io/projected/64529d7f-92ea-4fcd-8293-165f79009b53-kube-api-access-nmqzs\") pod \"64529d7f-92ea-4fcd-8293-165f79009b53\" (UID: \"64529d7f-92ea-4fcd-8293-165f79009b53\") " Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.140405 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64529d7f-92ea-4fcd-8293-165f79009b53-logs" (OuterVolumeSpecName: "logs") pod "64529d7f-92ea-4fcd-8293-165f79009b53" (UID: "64529d7f-92ea-4fcd-8293-165f79009b53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.146223 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64529d7f-92ea-4fcd-8293-165f79009b53-kube-api-access-nmqzs" (OuterVolumeSpecName: "kube-api-access-nmqzs") pod "64529d7f-92ea-4fcd-8293-165f79009b53" (UID: "64529d7f-92ea-4fcd-8293-165f79009b53"). InnerVolumeSpecName "kube-api-access-nmqzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.173143 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-config-data" (OuterVolumeSpecName: "config-data") pod "64529d7f-92ea-4fcd-8293-165f79009b53" (UID: "64529d7f-92ea-4fcd-8293-165f79009b53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.178124 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64529d7f-92ea-4fcd-8293-165f79009b53" (UID: "64529d7f-92ea-4fcd-8293-165f79009b53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.215155 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "64529d7f-92ea-4fcd-8293-165f79009b53" (UID: "64529d7f-92ea-4fcd-8293-165f79009b53"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.215628 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "64529d7f-92ea-4fcd-8293-165f79009b53" (UID: "64529d7f-92ea-4fcd-8293-165f79009b53"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.240329 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64529d7f-92ea-4fcd-8293-165f79009b53-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.240384 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.240397 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.240414 4996 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 
28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.240428 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmqzs\" (UniqueName: \"kubernetes.io/projected/64529d7f-92ea-4fcd-8293-165f79009b53-kube-api-access-nmqzs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.240439 4996 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/64529d7f-92ea-4fcd-8293-165f79009b53-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.609234 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.748406 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-config-data\") pod \"b49a28a3-c5c5-464f-9438-d1756138dfe1\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.748469 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4xjs\" (UniqueName: \"kubernetes.io/projected/b49a28a3-c5c5-464f-9438-d1756138dfe1-kube-api-access-t4xjs\") pod \"b49a28a3-c5c5-464f-9438-d1756138dfe1\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.748647 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-combined-ca-bundle\") pod \"b49a28a3-c5c5-464f-9438-d1756138dfe1\" (UID: \"b49a28a3-c5c5-464f-9438-d1756138dfe1\") " Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.757283 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/b49a28a3-c5c5-464f-9438-d1756138dfe1-kube-api-access-t4xjs" (OuterVolumeSpecName: "kube-api-access-t4xjs") pod "b49a28a3-c5c5-464f-9438-d1756138dfe1" (UID: "b49a28a3-c5c5-464f-9438-d1756138dfe1"). InnerVolumeSpecName "kube-api-access-t4xjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.781872 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b49a28a3-c5c5-464f-9438-d1756138dfe1" (UID: "b49a28a3-c5c5-464f-9438-d1756138dfe1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.792149 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-config-data" (OuterVolumeSpecName: "config-data") pod "b49a28a3-c5c5-464f-9438-d1756138dfe1" (UID: "b49a28a3-c5c5-464f-9438-d1756138dfe1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.850914 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.851229 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b49a28a3-c5c5-464f-9438-d1756138dfe1-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.851241 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4xjs\" (UniqueName: \"kubernetes.io/projected/b49a28a3-c5c5-464f-9438-d1756138dfe1-kube-api-access-t4xjs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.853404 4996 generic.go:334] "Generic (PLEG): container finished" podID="b49a28a3-c5c5-464f-9438-d1756138dfe1" containerID="44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c" exitCode=0 Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.853469 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.853495 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b49a28a3-c5c5-464f-9438-d1756138dfe1","Type":"ContainerDied","Data":"44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c"} Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.853540 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b49a28a3-c5c5-464f-9438-d1756138dfe1","Type":"ContainerDied","Data":"9240467e1a4a05f402be5ed0e6e3b08e266fb5228a9bcf81c0c983f658f5a921"} Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.853557 4996 scope.go:117] "RemoveContainer" containerID="44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.856519 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"64529d7f-92ea-4fcd-8293-165f79009b53","Type":"ContainerDied","Data":"009dc52316b5e9b0bc196fb22d3be49ca4ec102c7b6fc9c51417cba56ed08602"} Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.856624 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.859953 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerStarted","Data":"78450d3cb1a0c958e1eef4c7ba69a5a74b429f76fdbd7177eb3a5a8af1eb9040"} Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.860081 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-log" containerID="cri-o://a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66" gracePeriod=30 Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.860140 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-metadata" containerID="cri-o://c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4" gracePeriod=30 Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.898919 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.907100 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.907945 4996 scope.go:117] "RemoveContainer" containerID="44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c" Feb 28 09:23:38 crc kubenswrapper[4996]: E0228 09:23:38.909451 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c\": container with ID starting with 44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c not found: ID does not exist" 
containerID="44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.909493 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c"} err="failed to get container status \"44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c\": rpc error: code = NotFound desc = could not find container \"44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c\": container with ID starting with 44c1b9786be0686c4b6b312f935448ea4c8479effd8cc0e24a1b7c22dcfcbd3c not found: ID does not exist" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.909519 4996 scope.go:117] "RemoveContainer" containerID="d85e234bc0f7fe77d2f9e422922a93c93bff3aed8a26db896c99d433f72dc283" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.915412 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 09:23:38 crc kubenswrapper[4996]: E0228 09:23:38.915804 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64529d7f-92ea-4fcd-8293-165f79009b53" containerName="nova-api-api" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.915827 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="64529d7f-92ea-4fcd-8293-165f79009b53" containerName="nova-api-api" Feb 28 09:23:38 crc kubenswrapper[4996]: E0228 09:23:38.915836 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aca7df8-ff8f-457d-b65b-fba7e0eed249" containerName="nova-manage" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.915844 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aca7df8-ff8f-457d-b65b-fba7e0eed249" containerName="nova-manage" Feb 28 09:23:38 crc kubenswrapper[4996]: E0228 09:23:38.915866 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64529d7f-92ea-4fcd-8293-165f79009b53" containerName="nova-api-log" Feb 28 09:23:38 
crc kubenswrapper[4996]: I0228 09:23:38.915873 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="64529d7f-92ea-4fcd-8293-165f79009b53" containerName="nova-api-log" Feb 28 09:23:38 crc kubenswrapper[4996]: E0228 09:23:38.915893 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49a28a3-c5c5-464f-9438-d1756138dfe1" containerName="nova-scheduler-scheduler" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.915898 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49a28a3-c5c5-464f-9438-d1756138dfe1" containerName="nova-scheduler-scheduler" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.916072 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49a28a3-c5c5-464f-9438-d1756138dfe1" containerName="nova-scheduler-scheduler" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.916100 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aca7df8-ff8f-457d-b65b-fba7e0eed249" containerName="nova-manage" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.916123 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="64529d7f-92ea-4fcd-8293-165f79009b53" containerName="nova-api-api" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.916137 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="64529d7f-92ea-4fcd-8293-165f79009b53" containerName="nova-api-log" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.916742 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.923709 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.926590 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.931853 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.949179 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.962690 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.964188 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.967612 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.967957 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.968567 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.968761 4996 scope.go:117] "RemoveContainer" containerID="38665342b2e01698d06896cb009a3d6b7f829132b519683dde926d33493bd6e1" Feb 28 09:23:38 crc kubenswrapper[4996]: I0228 09:23:38.975391 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.043590 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64529d7f-92ea-4fcd-8293-165f79009b53" 
path="/var/lib/kubelet/pods/64529d7f-92ea-4fcd-8293-165f79009b53/volumes" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.044517 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49a28a3-c5c5-464f-9438-d1756138dfe1" path="/var/lib/kubelet/pods/b49a28a3-c5c5-464f-9438-d1756138dfe1/volumes" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.055981 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/594a5261-8810-4189-9140-39d0fc645c6e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"594a5261-8810-4189-9140-39d0fc645c6e\") " pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.056132 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/594a5261-8810-4189-9140-39d0fc645c6e-config-data\") pod \"nova-scheduler-0\" (UID: \"594a5261-8810-4189-9140-39d0fc645c6e\") " pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.056155 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnstn\" (UniqueName: \"kubernetes.io/projected/594a5261-8810-4189-9140-39d0fc645c6e-kube-api-access-gnstn\") pod \"nova-scheduler-0\" (UID: \"594a5261-8810-4189-9140-39d0fc645c6e\") " pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.158151 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-public-tls-certs\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.158224 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07191c7b-ef05-4fca-ab52-6df77fc1b92a-logs\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.158295 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-config-data\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.158375 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/594a5261-8810-4189-9140-39d0fc645c6e-config-data\") pod \"nova-scheduler-0\" (UID: \"594a5261-8810-4189-9140-39d0fc645c6e\") " pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.158416 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnstn\" (UniqueName: \"kubernetes.io/projected/594a5261-8810-4189-9140-39d0fc645c6e-kube-api-access-gnstn\") pod \"nova-scheduler-0\" (UID: \"594a5261-8810-4189-9140-39d0fc645c6e\") " pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.158454 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.158539 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.158639 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxkjc\" (UniqueName: \"kubernetes.io/projected/07191c7b-ef05-4fca-ab52-6df77fc1b92a-kube-api-access-rxkjc\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.158708 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/594a5261-8810-4189-9140-39d0fc645c6e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"594a5261-8810-4189-9140-39d0fc645c6e\") " pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.165766 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/594a5261-8810-4189-9140-39d0fc645c6e-config-data\") pod \"nova-scheduler-0\" (UID: \"594a5261-8810-4189-9140-39d0fc645c6e\") " pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.166216 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/594a5261-8810-4189-9140-39d0fc645c6e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"594a5261-8810-4189-9140-39d0fc645c6e\") " pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.179487 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnstn\" (UniqueName: \"kubernetes.io/projected/594a5261-8810-4189-9140-39d0fc645c6e-kube-api-access-gnstn\") pod \"nova-scheduler-0\" (UID: \"594a5261-8810-4189-9140-39d0fc645c6e\") " 
pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.242892 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.260566 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.260971 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.261120 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxkjc\" (UniqueName: \"kubernetes.io/projected/07191c7b-ef05-4fca-ab52-6df77fc1b92a-kube-api-access-rxkjc\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.261197 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-public-tls-certs\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.261222 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07191c7b-ef05-4fca-ab52-6df77fc1b92a-logs\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc 
kubenswrapper[4996]: I0228 09:23:39.261241 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-config-data\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.262076 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07191c7b-ef05-4fca-ab52-6df77fc1b92a-logs\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.264189 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.264795 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.266798 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-config-data\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.267159 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07191c7b-ef05-4fca-ab52-6df77fc1b92a-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.294252 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxkjc\" (UniqueName: \"kubernetes.io/projected/07191c7b-ef05-4fca-ab52-6df77fc1b92a-kube-api-access-rxkjc\") pod \"nova-api-0\" (UID: \"07191c7b-ef05-4fca-ab52-6df77fc1b92a\") " pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.337151 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.700657 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 09:23:39 crc kubenswrapper[4996]: W0228 09:23:39.703140 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod594a5261_8810_4189_9140_39d0fc645c6e.slice/crio-5c5955b4679adfa0c0a09662b194f835f21b29c2372d305ffd3dbde35c441c74 WatchSource:0}: Error finding container 5c5955b4679adfa0c0a09662b194f835f21b29c2372d305ffd3dbde35c441c74: Status 404 returned error can't find the container with id 5c5955b4679adfa0c0a09662b194f835f21b29c2372d305ffd3dbde35c441c74 Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.797579 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.888661 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"594a5261-8810-4189-9140-39d0fc645c6e","Type":"ContainerStarted","Data":"5c5955b4679adfa0c0a09662b194f835f21b29c2372d305ffd3dbde35c441c74"} Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.890135 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"07191c7b-ef05-4fca-ab52-6df77fc1b92a","Type":"ContainerStarted","Data":"32c1aab62e858191aefc8fa37c9461b43a085ab7ffab2494324b40c8b874df52"} Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.894255 4996 generic.go:334] "Generic (PLEG): container finished" podID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerID="a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66" exitCode=143 Feb 28 09:23:39 crc kubenswrapper[4996]: I0228 09:23:39.894315 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b9a34f8-7f3c-41ff-896c-328fe217c902","Type":"ContainerDied","Data":"a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66"} Feb 28 09:23:40 crc kubenswrapper[4996]: I0228 09:23:40.910678 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerStarted","Data":"65ddfaf80b45c7cdf6301b93729987e5149f963eb3997a48aca7ee97bae411dd"} Feb 28 09:23:40 crc kubenswrapper[4996]: I0228 09:23:40.912372 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 09:23:40 crc kubenswrapper[4996]: I0228 09:23:40.915469 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07191c7b-ef05-4fca-ab52-6df77fc1b92a","Type":"ContainerStarted","Data":"15180713dc507f68e422e2c61632727e393191e40fffc7c5e952a707dadfd07b"} Feb 28 09:23:40 crc kubenswrapper[4996]: I0228 09:23:40.915523 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"07191c7b-ef05-4fca-ab52-6df77fc1b92a","Type":"ContainerStarted","Data":"656d85af5f010b613634777d0a29d374f404ef61811c537a95cbe320893c6d94"} Feb 28 09:23:40 crc kubenswrapper[4996]: I0228 09:23:40.917260 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"594a5261-8810-4189-9140-39d0fc645c6e","Type":"ContainerStarted","Data":"91ae0d02236471d51d842f2d17f282180a7c2ae1275d2aac26b987331af3d0b3"} Feb 28 09:23:40 crc kubenswrapper[4996]: I0228 09:23:40.949153 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.514621091 podStartE2EDuration="7.94912728s" podCreationTimestamp="2026-02-28 09:23:33 +0000 UTC" firstStartedPulling="2026-02-28 09:23:34.64733071 +0000 UTC m=+1378.338133531" lastFinishedPulling="2026-02-28 09:23:40.081836909 +0000 UTC m=+1383.772639720" observedRunningTime="2026-02-28 09:23:40.932115394 +0000 UTC m=+1384.622918225" watchObservedRunningTime="2026-02-28 09:23:40.94912728 +0000 UTC m=+1384.639930091" Feb 28 09:23:40 crc kubenswrapper[4996]: I0228 09:23:40.980641 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.98061598 podStartE2EDuration="2.98061598s" podCreationTimestamp="2026-02-28 09:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:23:40.959126504 +0000 UTC m=+1384.649929315" watchObservedRunningTime="2026-02-28 09:23:40.98061598 +0000 UTC m=+1384.671418791" Feb 28 09:23:40 crc kubenswrapper[4996]: I0228 09:23:40.986860 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.986792351 podStartE2EDuration="2.986792351s" podCreationTimestamp="2026-02-28 09:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:23:40.977065753 +0000 UTC m=+1384.667868584" watchObservedRunningTime="2026-02-28 09:23:40.986792351 +0000 UTC m=+1384.677595162" Feb 28 09:23:41 crc kubenswrapper[4996]: I0228 09:23:41.994195 4996 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/nova-metadata-0" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:49016->10.217.0.196:8775: read: connection reset by peer" Feb 28 09:23:41 crc kubenswrapper[4996]: I0228 09:23:41.994277 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:49028->10.217.0.196:8775: read: connection reset by peer" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.249447 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.249723 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.444828 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.528687 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9a34f8-7f3c-41ff-896c-328fe217c902-logs\") pod \"7b9a34f8-7f3c-41ff-896c-328fe217c902\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.528784 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5plg\" (UniqueName: \"kubernetes.io/projected/7b9a34f8-7f3c-41ff-896c-328fe217c902-kube-api-access-v5plg\") pod \"7b9a34f8-7f3c-41ff-896c-328fe217c902\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.528913 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-combined-ca-bundle\") pod \"7b9a34f8-7f3c-41ff-896c-328fe217c902\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.529149 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-config-data\") pod \"7b9a34f8-7f3c-41ff-896c-328fe217c902\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.529225 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-nova-metadata-tls-certs\") pod \"7b9a34f8-7f3c-41ff-896c-328fe217c902\" (UID: \"7b9a34f8-7f3c-41ff-896c-328fe217c902\") " Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.529280 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7b9a34f8-7f3c-41ff-896c-328fe217c902-logs" (OuterVolumeSpecName: "logs") pod "7b9a34f8-7f3c-41ff-896c-328fe217c902" (UID: "7b9a34f8-7f3c-41ff-896c-328fe217c902"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.530385 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b9a34f8-7f3c-41ff-896c-328fe217c902-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.548588 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9a34f8-7f3c-41ff-896c-328fe217c902-kube-api-access-v5plg" (OuterVolumeSpecName: "kube-api-access-v5plg") pod "7b9a34f8-7f3c-41ff-896c-328fe217c902" (UID: "7b9a34f8-7f3c-41ff-896c-328fe217c902"). InnerVolumeSpecName "kube-api-access-v5plg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.589414 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-config-data" (OuterVolumeSpecName: "config-data") pod "7b9a34f8-7f3c-41ff-896c-328fe217c902" (UID: "7b9a34f8-7f3c-41ff-896c-328fe217c902"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.591186 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b9a34f8-7f3c-41ff-896c-328fe217c902" (UID: "7b9a34f8-7f3c-41ff-896c-328fe217c902"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.602108 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7b9a34f8-7f3c-41ff-896c-328fe217c902" (UID: "7b9a34f8-7f3c-41ff-896c-328fe217c902"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.631822 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.631859 4996 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.631869 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5plg\" (UniqueName: \"kubernetes.io/projected/7b9a34f8-7f3c-41ff-896c-328fe217c902-kube-api-access-v5plg\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.631877 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9a34f8-7f3c-41ff-896c-328fe217c902-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.953194 4996 generic.go:334] "Generic (PLEG): container finished" podID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerID="c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4" exitCode=0 Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.953351 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.953417 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b9a34f8-7f3c-41ff-896c-328fe217c902","Type":"ContainerDied","Data":"c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4"} Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.953467 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b9a34f8-7f3c-41ff-896c-328fe217c902","Type":"ContainerDied","Data":"1a569e66dbd966973741742883c63420d5039a16edbde4943d46fe176be75d2d"} Feb 28 09:23:42 crc kubenswrapper[4996]: I0228 09:23:42.953492 4996 scope.go:117] "RemoveContainer" containerID="c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.004790 4996 scope.go:117] "RemoveContainer" containerID="a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.015168 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.027577 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.046083 4996 scope.go:117] "RemoveContainer" containerID="c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4" Feb 28 09:23:43 crc kubenswrapper[4996]: E0228 09:23:43.047230 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4\": container with ID starting with c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4 not found: ID does not exist" containerID="c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4" Feb 28 09:23:43 crc kubenswrapper[4996]: 
I0228 09:23:43.047273 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4"} err="failed to get container status \"c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4\": rpc error: code = NotFound desc = could not find container \"c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4\": container with ID starting with c3e29f2307895d1ec9ec58c81dd9a46636823bc0e9f2105bdf8496ddb7a539d4 not found: ID does not exist" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.047299 4996 scope.go:117] "RemoveContainer" containerID="a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66" Feb 28 09:23:43 crc kubenswrapper[4996]: E0228 09:23:43.047573 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66\": container with ID starting with a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66 not found: ID does not exist" containerID="a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.047595 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66"} err="failed to get container status \"a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66\": rpc error: code = NotFound desc = could not find container \"a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66\": container with ID starting with a67e803025d2a8fd049745e46c8048d568e6164352301a30aae035ce9fe93c66 not found: ID does not exist" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.065122 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" 
path="/var/lib/kubelet/pods/7b9a34f8-7f3c-41ff-896c-328fe217c902/volumes" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.065967 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:43 crc kubenswrapper[4996]: E0228 09:23:43.066391 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-metadata" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.066465 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-metadata" Feb 28 09:23:43 crc kubenswrapper[4996]: E0228 09:23:43.066561 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-log" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.066627 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-log" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.066919 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-log" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.066996 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9a34f8-7f3c-41ff-896c-328fe217c902" containerName="nova-metadata-metadata" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.068281 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.068431 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.072738 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.072928 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.258106 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3eb16fcc-5ac7-437e-bca5-e82873599fac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.258203 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb16fcc-5ac7-437e-bca5-e82873599fac-logs\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.258340 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb16fcc-5ac7-437e-bca5-e82873599fac-config-data\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.258629 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjjl2\" (UniqueName: \"kubernetes.io/projected/3eb16fcc-5ac7-437e-bca5-e82873599fac-kube-api-access-fjjl2\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.258852 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb16fcc-5ac7-437e-bca5-e82873599fac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.360992 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb16fcc-5ac7-437e-bca5-e82873599fac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.361155 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3eb16fcc-5ac7-437e-bca5-e82873599fac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.361285 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb16fcc-5ac7-437e-bca5-e82873599fac-logs\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.361325 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb16fcc-5ac7-437e-bca5-e82873599fac-config-data\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.361538 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjjl2\" (UniqueName: 
\"kubernetes.io/projected/3eb16fcc-5ac7-437e-bca5-e82873599fac-kube-api-access-fjjl2\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.361783 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3eb16fcc-5ac7-437e-bca5-e82873599fac-logs\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.365666 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3eb16fcc-5ac7-437e-bca5-e82873599fac-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.366386 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3eb16fcc-5ac7-437e-bca5-e82873599fac-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.378878 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3eb16fcc-5ac7-437e-bca5-e82873599fac-config-data\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.390922 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjjl2\" (UniqueName: \"kubernetes.io/projected/3eb16fcc-5ac7-437e-bca5-e82873599fac-kube-api-access-fjjl2\") pod \"nova-metadata-0\" (UID: \"3eb16fcc-5ac7-437e-bca5-e82873599fac\") " pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc 
kubenswrapper[4996]: I0228 09:23:43.423547 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.925133 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 09:23:43 crc kubenswrapper[4996]: I0228 09:23:43.965407 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3eb16fcc-5ac7-437e-bca5-e82873599fac","Type":"ContainerStarted","Data":"a492a3e62d00cbdee73a40f385c410e60cf93481340f5d6aad1469a98aaa0274"} Feb 28 09:23:44 crc kubenswrapper[4996]: I0228 09:23:44.243839 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 09:23:44 crc kubenswrapper[4996]: I0228 09:23:44.983362 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3eb16fcc-5ac7-437e-bca5-e82873599fac","Type":"ContainerStarted","Data":"08359a2421da9ff0ada189918be3dd616601a638f620e63696fd3635358e65c0"} Feb 28 09:23:44 crc kubenswrapper[4996]: I0228 09:23:44.983810 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3eb16fcc-5ac7-437e-bca5-e82873599fac","Type":"ContainerStarted","Data":"21a0c6c713393e986685d936f11aeae1c65bc897b120552db66bb0eb900a0f3f"} Feb 28 09:23:48 crc kubenswrapper[4996]: I0228 09:23:48.424667 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 09:23:48 crc kubenswrapper[4996]: I0228 09:23:48.424943 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 09:23:49 crc kubenswrapper[4996]: I0228 09:23:49.243578 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 09:23:49 crc kubenswrapper[4996]: I0228 09:23:49.284157 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-scheduler-0" Feb 28 09:23:49 crc kubenswrapper[4996]: I0228 09:23:49.310728 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=7.310710745 podStartE2EDuration="7.310710745s" podCreationTimestamp="2026-02-28 09:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:23:45.018396709 +0000 UTC m=+1388.709199550" watchObservedRunningTime="2026-02-28 09:23:49.310710745 +0000 UTC m=+1393.001513556" Feb 28 09:23:49 crc kubenswrapper[4996]: I0228 09:23:49.338208 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 09:23:49 crc kubenswrapper[4996]: I0228 09:23:49.338297 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 09:23:50 crc kubenswrapper[4996]: I0228 09:23:50.091963 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 09:23:50 crc kubenswrapper[4996]: I0228 09:23:50.354373 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="07191c7b-ef05-4fca-ab52-6df77fc1b92a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 09:23:50 crc kubenswrapper[4996]: I0228 09:23:50.354320 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="07191c7b-ef05-4fca-ab52-6df77fc1b92a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 09:23:53 crc kubenswrapper[4996]: I0228 09:23:53.425328 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-metadata-0" Feb 28 09:23:53 crc kubenswrapper[4996]: I0228 09:23:53.425604 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 09:23:54 crc kubenswrapper[4996]: I0228 09:23:54.441239 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3eb16fcc-5ac7-437e-bca5-e82873599fac" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 09:23:54 crc kubenswrapper[4996]: I0228 09:23:54.441434 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3eb16fcc-5ac7-437e-bca5-e82873599fac" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.204:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 09:23:59 crc kubenswrapper[4996]: I0228 09:23:59.345067 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 09:23:59 crc kubenswrapper[4996]: I0228 09:23:59.346219 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 09:23:59 crc kubenswrapper[4996]: I0228 09:23:59.348310 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 09:23:59 crc kubenswrapper[4996]: I0228 09:23:59.352533 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.139759 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.154180 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.158178 4996 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537844-nc5cg"] Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.160781 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537844-nc5cg" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.163104 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.163553 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.163874 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.172239 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537844-nc5cg"] Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.262399 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt22k\" (UniqueName: \"kubernetes.io/projected/6d715edd-156b-434f-8c63-0f6ef5314659-kube-api-access-pt22k\") pod \"auto-csr-approver-29537844-nc5cg\" (UID: \"6d715edd-156b-434f-8c63-0f6ef5314659\") " pod="openshift-infra/auto-csr-approver-29537844-nc5cg" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.364226 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt22k\" (UniqueName: \"kubernetes.io/projected/6d715edd-156b-434f-8c63-0f6ef5314659-kube-api-access-pt22k\") pod \"auto-csr-approver-29537844-nc5cg\" (UID: \"6d715edd-156b-434f-8c63-0f6ef5314659\") " pod="openshift-infra/auto-csr-approver-29537844-nc5cg" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.385684 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pt22k\" (UniqueName: \"kubernetes.io/projected/6d715edd-156b-434f-8c63-0f6ef5314659-kube-api-access-pt22k\") pod \"auto-csr-approver-29537844-nc5cg\" (UID: \"6d715edd-156b-434f-8c63-0f6ef5314659\") " pod="openshift-infra/auto-csr-approver-29537844-nc5cg" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.479041 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537844-nc5cg" Feb 28 09:24:00 crc kubenswrapper[4996]: I0228 09:24:00.926438 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537844-nc5cg"] Feb 28 09:24:01 crc kubenswrapper[4996]: I0228 09:24:01.149087 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537844-nc5cg" event={"ID":"6d715edd-156b-434f-8c63-0f6ef5314659","Type":"ContainerStarted","Data":"39e405bc0b02912b0f69080a32912c333b9ce04986a6f845989618f0e0475e71"} Feb 28 09:24:02 crc kubenswrapper[4996]: I0228 09:24:02.160641 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537844-nc5cg" event={"ID":"6d715edd-156b-434f-8c63-0f6ef5314659","Type":"ContainerStarted","Data":"b49a7caf8a70f9cdd3f8024b64460594befd85a7d744cd3f1e5a9b17f6e7bab0"} Feb 28 09:24:02 crc kubenswrapper[4996]: I0228 09:24:02.185804 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537844-nc5cg" podStartSLOduration=1.362486451 podStartE2EDuration="2.185781155s" podCreationTimestamp="2026-02-28 09:24:00 +0000 UTC" firstStartedPulling="2026-02-28 09:24:00.933191521 +0000 UTC m=+1404.623994332" lastFinishedPulling="2026-02-28 09:24:01.756486215 +0000 UTC m=+1405.447289036" observedRunningTime="2026-02-28 09:24:02.176262492 +0000 UTC m=+1405.867065303" watchObservedRunningTime="2026-02-28 09:24:02.185781155 +0000 UTC m=+1405.876583966" Feb 28 09:24:03 crc kubenswrapper[4996]: I0228 09:24:03.173905 4996 
generic.go:334] "Generic (PLEG): container finished" podID="6d715edd-156b-434f-8c63-0f6ef5314659" containerID="b49a7caf8a70f9cdd3f8024b64460594befd85a7d744cd3f1e5a9b17f6e7bab0" exitCode=0 Feb 28 09:24:03 crc kubenswrapper[4996]: I0228 09:24:03.173982 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537844-nc5cg" event={"ID":"6d715edd-156b-434f-8c63-0f6ef5314659","Type":"ContainerDied","Data":"b49a7caf8a70f9cdd3f8024b64460594befd85a7d744cd3f1e5a9b17f6e7bab0"} Feb 28 09:24:03 crc kubenswrapper[4996]: I0228 09:24:03.434991 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 09:24:03 crc kubenswrapper[4996]: I0228 09:24:03.436164 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 09:24:03 crc kubenswrapper[4996]: I0228 09:24:03.446231 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 09:24:04 crc kubenswrapper[4996]: I0228 09:24:04.192358 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 28 09:24:04 crc kubenswrapper[4996]: I0228 09:24:04.196823 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 09:24:04 crc kubenswrapper[4996]: I0228 09:24:04.584883 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537844-nc5cg" Feb 28 09:24:04 crc kubenswrapper[4996]: I0228 09:24:04.758500 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt22k\" (UniqueName: \"kubernetes.io/projected/6d715edd-156b-434f-8c63-0f6ef5314659-kube-api-access-pt22k\") pod \"6d715edd-156b-434f-8c63-0f6ef5314659\" (UID: \"6d715edd-156b-434f-8c63-0f6ef5314659\") " Feb 28 09:24:04 crc kubenswrapper[4996]: I0228 09:24:04.770594 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d715edd-156b-434f-8c63-0f6ef5314659-kube-api-access-pt22k" (OuterVolumeSpecName: "kube-api-access-pt22k") pod "6d715edd-156b-434f-8c63-0f6ef5314659" (UID: "6d715edd-156b-434f-8c63-0f6ef5314659"). InnerVolumeSpecName "kube-api-access-pt22k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:24:04 crc kubenswrapper[4996]: I0228 09:24:04.861090 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt22k\" (UniqueName: \"kubernetes.io/projected/6d715edd-156b-434f-8c63-0f6ef5314659-kube-api-access-pt22k\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:05 crc kubenswrapper[4996]: I0228 09:24:05.193550 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537844-nc5cg" Feb 28 09:24:05 crc kubenswrapper[4996]: I0228 09:24:05.193705 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537844-nc5cg" event={"ID":"6d715edd-156b-434f-8c63-0f6ef5314659","Type":"ContainerDied","Data":"39e405bc0b02912b0f69080a32912c333b9ce04986a6f845989618f0e0475e71"} Feb 28 09:24:05 crc kubenswrapper[4996]: I0228 09:24:05.193741 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39e405bc0b02912b0f69080a32912c333b9ce04986a6f845989618f0e0475e71" Feb 28 09:24:05 crc kubenswrapper[4996]: I0228 09:24:05.245325 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537838-n88jq"] Feb 28 09:24:05 crc kubenswrapper[4996]: I0228 09:24:05.253069 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537838-n88jq"] Feb 28 09:24:07 crc kubenswrapper[4996]: I0228 09:24:07.055712 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23adee06-6959-459e-9756-94f5f491682c" path="/var/lib/kubelet/pods/23adee06-6959-459e-9756-94f5f491682c/volumes" Feb 28 09:24:12 crc kubenswrapper[4996]: I0228 09:24:12.249324 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:24:12 crc kubenswrapper[4996]: I0228 09:24:12.249826 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:24:12 crc 
kubenswrapper[4996]: I0228 09:24:12.249884 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:24:12 crc kubenswrapper[4996]: I0228 09:24:12.250749 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d74e0665ce63a7b1e3ccb10e05382d63c764d169c6c0125d3275a4454729a94"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:24:12 crc kubenswrapper[4996]: I0228 09:24:12.250819 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://0d74e0665ce63a7b1e3ccb10e05382d63c764d169c6c0125d3275a4454729a94" gracePeriod=600 Feb 28 09:24:12 crc kubenswrapper[4996]: I0228 09:24:12.345084 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 09:24:13 crc kubenswrapper[4996]: I0228 09:24:13.281098 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="0d74e0665ce63a7b1e3ccb10e05382d63c764d169c6c0125d3275a4454729a94" exitCode=0 Feb 28 09:24:13 crc kubenswrapper[4996]: I0228 09:24:13.281145 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"0d74e0665ce63a7b1e3ccb10e05382d63c764d169c6c0125d3275a4454729a94"} Feb 28 09:24:13 crc kubenswrapper[4996]: I0228 09:24:13.281541 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e"} Feb 28 09:24:13 crc kubenswrapper[4996]: I0228 09:24:13.281581 4996 scope.go:117] "RemoveContainer" containerID="4772ad3990edf9d0c6d563de92a45c70bf5a82075c0fa4fd5de03b133e39b174" Feb 28 09:24:14 crc kubenswrapper[4996]: I0228 09:24:14.304903 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 09:24:16 crc kubenswrapper[4996]: I0228 09:24:16.556258 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7dfcffc8-039f-459c-9f97-d8d595506234" containerName="rabbitmq" containerID="cri-o://2513f5615f3168be7844d6c7b8e824184736476692ac5f97a547ef674f00c3eb" gracePeriod=604796 Feb 28 09:24:18 crc kubenswrapper[4996]: I0228 09:24:18.527698 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerName="rabbitmq" containerID="cri-o://926c9ffc7d896509d930bbdde07970546a6e4f3e11ff35c17c6870942134471d" gracePeriod=604796 Feb 28 09:24:20 crc kubenswrapper[4996]: I0228 09:24:20.358605 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Feb 28 09:24:20 crc kubenswrapper[4996]: I0228 09:24:20.585643 4996 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7dfcffc8-039f-459c-9f97-d8d595506234" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.379997 4996 generic.go:334] "Generic (PLEG): container finished" podID="7dfcffc8-039f-459c-9f97-d8d595506234" 
containerID="2513f5615f3168be7844d6c7b8e824184736476692ac5f97a547ef674f00c3eb" exitCode=0 Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.380046 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7dfcffc8-039f-459c-9f97-d8d595506234","Type":"ContainerDied","Data":"2513f5615f3168be7844d6c7b8e824184736476692ac5f97a547ef674f00c3eb"} Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.591067 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644168 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-plugins-conf\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644230 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-tls\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644263 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-config-data\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644326 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srl78\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-kube-api-access-srl78\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 
09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644364 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7dfcffc8-039f-459c-9f97-d8d595506234-pod-info\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644410 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-server-conf\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644456 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-erlang-cookie\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644558 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-plugins\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644586 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644619 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/7dfcffc8-039f-459c-9f97-d8d595506234-erlang-cookie-secret\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.644657 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-confd\") pod \"7dfcffc8-039f-459c-9f97-d8d595506234\" (UID: \"7dfcffc8-039f-459c-9f97-d8d595506234\") " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.645509 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.646203 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.646228 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.651861 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-kube-api-access-srl78" (OuterVolumeSpecName: "kube-api-access-srl78") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "kube-api-access-srl78". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.652063 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.654345 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dfcffc8-039f-459c-9f97-d8d595506234-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.666429 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7dfcffc8-039f-459c-9f97-d8d595506234-pod-info" (OuterVolumeSpecName: "pod-info") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.701289 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.712443 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-config-data" (OuterVolumeSpecName: "config-data") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.746282 4996 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.746501 4996 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.748904 4996 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7dfcffc8-039f-459c-9f97-d8d595506234-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.752279 4996 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc 
kubenswrapper[4996]: I0228 09:24:23.752362 4996 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.752451 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.752542 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srl78\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-kube-api-access-srl78\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.752621 4996 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7dfcffc8-039f-459c-9f97-d8d595506234-pod-info\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.757839 4996 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.750676 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-server-conf" (OuterVolumeSpecName: "server-conf") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.784343 4996 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.807963 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7dfcffc8-039f-459c-9f97-d8d595506234" (UID: "7dfcffc8-039f-459c-9f97-d8d595506234"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.859935 4996 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.859982 4996 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7dfcffc8-039f-459c-9f97-d8d595506234-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:23 crc kubenswrapper[4996]: I0228 09:24:23.859994 4996 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7dfcffc8-039f-459c-9f97-d8d595506234-server-conf\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.389887 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7dfcffc8-039f-459c-9f97-d8d595506234","Type":"ContainerDied","Data":"eade5aa9446da0bf28340201432908fb89b03a3f9170ea73fa529b895ac03915"} Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.389951 4996 scope.go:117] "RemoveContainer" containerID="2513f5615f3168be7844d6c7b8e824184736476692ac5f97a547ef674f00c3eb" Feb 28 09:24:24 
crc kubenswrapper[4996]: I0228 09:24:24.390147 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.436556 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.445462 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.454256 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 09:24:24 crc kubenswrapper[4996]: E0228 09:24:24.454670 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfcffc8-039f-459c-9f97-d8d595506234" containerName="rabbitmq" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.454688 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfcffc8-039f-459c-9f97-d8d595506234" containerName="rabbitmq" Feb 28 09:24:24 crc kubenswrapper[4996]: E0228 09:24:24.454705 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfcffc8-039f-459c-9f97-d8d595506234" containerName="setup-container" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.454713 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfcffc8-039f-459c-9f97-d8d595506234" containerName="setup-container" Feb 28 09:24:24 crc kubenswrapper[4996]: E0228 09:24:24.454737 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d715edd-156b-434f-8c63-0f6ef5314659" containerName="oc" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.454743 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d715edd-156b-434f-8c63-0f6ef5314659" containerName="oc" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.454908 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfcffc8-039f-459c-9f97-d8d595506234" containerName="rabbitmq" Feb 28 09:24:24 crc 
kubenswrapper[4996]: I0228 09:24:24.454928 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d715edd-156b-434f-8c63-0f6ef5314659" containerName="oc" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.455847 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.457365 4996 scope.go:117] "RemoveContainer" containerID="3ed538687163ff88d97f08dd4b725bdd040099b40e9ac9d1bdbc3e2e3d8f19f4" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.459646 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.459909 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.460078 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.460198 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hvgx4" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.460321 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.460427 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.460562 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.494509 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570204 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-b9m7v\" (UniqueName: \"kubernetes.io/projected/f61ee28f-ef2a-45ee-9832-57559af20a84-kube-api-access-b9m7v\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570304 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f61ee28f-ef2a-45ee-9832-57559af20a84-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570342 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570366 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f61ee28f-ef2a-45ee-9832-57559af20a84-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570384 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f61ee28f-ef2a-45ee-9832-57559af20a84-config-data\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570505 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/f61ee28f-ef2a-45ee-9832-57559af20a84-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570523 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570556 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570600 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570622 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.570642 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f61ee28f-ef2a-45ee-9832-57559af20a84-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.672939 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673110 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673184 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673224 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f61ee28f-ef2a-45ee-9832-57559af20a84-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673318 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9m7v\" (UniqueName: \"kubernetes.io/projected/f61ee28f-ef2a-45ee-9832-57559af20a84-kube-api-access-b9m7v\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673401 
4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f61ee28f-ef2a-45ee-9832-57559af20a84-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673457 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673505 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f61ee28f-ef2a-45ee-9832-57559af20a84-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673538 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f61ee28f-ef2a-45ee-9832-57559af20a84-config-data\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673618 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f61ee28f-ef2a-45ee-9832-57559af20a84-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673658 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673679 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.673457 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.674228 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.675462 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f61ee28f-ef2a-45ee-9832-57559af20a84-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.675622 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f61ee28f-ef2a-45ee-9832-57559af20a84-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " 
pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.676200 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f61ee28f-ef2a-45ee-9832-57559af20a84-config-data\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.676880 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.677389 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f61ee28f-ef2a-45ee-9832-57559af20a84-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.681916 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f61ee28f-ef2a-45ee-9832-57559af20a84-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.687900 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f61ee28f-ef2a-45ee-9832-57559af20a84-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.693994 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9m7v\" (UniqueName: 
\"kubernetes.io/projected/f61ee28f-ef2a-45ee-9832-57559af20a84-kube-api-access-b9m7v\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.712029 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f61ee28f-ef2a-45ee-9832-57559af20a84\") " pod="openstack/rabbitmq-server-0" Feb 28 09:24:24 crc kubenswrapper[4996]: I0228 09:24:24.828854 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.045275 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dfcffc8-039f-459c-9f97-d8d595506234" path="/var/lib/kubelet/pods/7dfcffc8-039f-459c-9f97-d8d595506234/volumes" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.347570 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.400368 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f61ee28f-ef2a-45ee-9832-57559af20a84","Type":"ContainerStarted","Data":"f505d8f8a62831795aaa66fbfdec5e0b21a77f6b5a4db50729af625f0896976c"} Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.406584 4996 generic.go:334] "Generic (PLEG): container finished" podID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerID="926c9ffc7d896509d930bbdde07970546a6e4f3e11ff35c17c6870942134471d" exitCode=0 Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.406679 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d394b420-eb09-49f3-a92c-32cbed3f63eb","Type":"ContainerDied","Data":"926c9ffc7d896509d930bbdde07970546a6e4f3e11ff35c17c6870942134471d"} Feb 28 09:24:25 crc 
kubenswrapper[4996]: I0228 09:24:25.618266 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.690955 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-config-data\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.691062 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-plugins\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.691109 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d394b420-eb09-49f3-a92c-32cbed3f63eb-pod-info\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.691143 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.691203 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-tls\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.692111 4996 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.694965 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d394b420-eb09-49f3-a92c-32cbed3f63eb-erlang-cookie-secret\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.695021 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-erlang-cookie\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.695140 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhf65\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-kube-api-access-qhf65\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.695192 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-server-conf\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.695230 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-confd\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.695268 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-plugins-conf\") pod \"d394b420-eb09-49f3-a92c-32cbed3f63eb\" (UID: \"d394b420-eb09-49f3-a92c-32cbed3f63eb\") " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.695894 4996 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.696376 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.696772 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.697193 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.698184 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.698461 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d394b420-eb09-49f3-a92c-32cbed3f63eb-pod-info" (OuterVolumeSpecName: "pod-info") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.699146 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-kube-api-access-qhf65" (OuterVolumeSpecName: "kube-api-access-qhf65") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "kube-api-access-qhf65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.700283 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d394b420-eb09-49f3-a92c-32cbed3f63eb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.722921 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-config-data" (OuterVolumeSpecName: "config-data") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.748402 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-server-conf" (OuterVolumeSpecName: "server-conf") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.790294 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d394b420-eb09-49f3-a92c-32cbed3f63eb" (UID: "d394b420-eb09-49f3-a92c-32cbed3f63eb"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.799364 4996 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d394b420-eb09-49f3-a92c-32cbed3f63eb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.799401 4996 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.799412 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhf65\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-kube-api-access-qhf65\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.799421 4996 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-server-conf\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.799431 4996 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.799439 4996 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.799447 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d394b420-eb09-49f3-a92c-32cbed3f63eb-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: 
I0228 09:24:25.799457 4996 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d394b420-eb09-49f3-a92c-32cbed3f63eb-pod-info\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.799497 4996 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.799508 4996 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d394b420-eb09-49f3-a92c-32cbed3f63eb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.817323 4996 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Feb 28 09:24:25 crc kubenswrapper[4996]: I0228 09:24:25.901132 4996 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.422583 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d394b420-eb09-49f3-a92c-32cbed3f63eb","Type":"ContainerDied","Data":"d18810ae55bbe7a84a060d05c9cdb88fc705bae20c89f9884c92f3f223e8b1fb"} Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.422925 4996 scope.go:117] "RemoveContainer" containerID="926c9ffc7d896509d930bbdde07970546a6e4f3e11ff35c17c6870942134471d" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.422652 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.445967 4996 scope.go:117] "RemoveContainer" containerID="a25211526baa7403188021f3fc538e87de9e8c663c7b688919dd5d11965d108c" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.459157 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.475244 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.514636 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 09:24:26 crc kubenswrapper[4996]: E0228 09:24:26.515046 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerName="setup-container" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.515061 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerName="setup-container" Feb 28 09:24:26 crc kubenswrapper[4996]: E0228 09:24:26.515097 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerName="rabbitmq" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.515104 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerName="rabbitmq" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.515254 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d394b420-eb09-49f3-a92c-32cbed3f63eb" containerName="rabbitmq" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.516974 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.520460 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.520665 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.520886 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.521136 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-27dtj" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.521760 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.522168 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.522335 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.523209 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.615885 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77efb507-fab5-4164-8cd8-576b15f4d6f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.615942 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/77efb507-fab5-4164-8cd8-576b15f4d6f8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.615966 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.615992 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.616039 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.616062 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77efb507-fab5-4164-8cd8-576b15f4d6f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.616130 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/77efb507-fab5-4164-8cd8-576b15f4d6f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.616161 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.616177 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.616204 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grwv\" (UniqueName: \"kubernetes.io/projected/77efb507-fab5-4164-8cd8-576b15f4d6f8-kube-api-access-8grwv\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.616234 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77efb507-fab5-4164-8cd8-576b15f4d6f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.716713 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.716948 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.716975 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8grwv\" (UniqueName: \"kubernetes.io/projected/77efb507-fab5-4164-8cd8-576b15f4d6f8-kube-api-access-8grwv\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717020 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77efb507-fab5-4164-8cd8-576b15f4d6f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717042 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77efb507-fab5-4164-8cd8-576b15f4d6f8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717069 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77efb507-fab5-4164-8cd8-576b15f4d6f8-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717087 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717105 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717137 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717164 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77efb507-fab5-4164-8cd8-576b15f4d6f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717224 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77efb507-fab5-4164-8cd8-576b15f4d6f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 
09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717282 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717465 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.717571 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.718884 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/77efb507-fab5-4164-8cd8-576b15f4d6f8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.719540 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/77efb507-fab5-4164-8cd8-576b15f4d6f8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.719787 4996 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77efb507-fab5-4164-8cd8-576b15f4d6f8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.739723 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8grwv\" (UniqueName: \"kubernetes.io/projected/77efb507-fab5-4164-8cd8-576b15f4d6f8-kube-api-access-8grwv\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.744926 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.798232 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.798799 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/77efb507-fab5-4164-8cd8-576b15f4d6f8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.799173 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/77efb507-fab5-4164-8cd8-576b15f4d6f8-pod-info\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.801582 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/77efb507-fab5-4164-8cd8-576b15f4d6f8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"77efb507-fab5-4164-8cd8-576b15f4d6f8\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:26 crc kubenswrapper[4996]: I0228 09:24:26.948761 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:24:27 crc kubenswrapper[4996]: I0228 09:24:27.048076 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d394b420-eb09-49f3-a92c-32cbed3f63eb" path="/var/lib/kubelet/pods/d394b420-eb09-49f3-a92c-32cbed3f63eb/volumes" Feb 28 09:24:27 crc kubenswrapper[4996]: I0228 09:24:27.410060 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 09:24:27 crc kubenswrapper[4996]: I0228 09:24:27.456413 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"77efb507-fab5-4164-8cd8-576b15f4d6f8","Type":"ContainerStarted","Data":"a6906d5312769f981879106badf982c697a5a9518eb4f97cd0cbe15e7b750590"} Feb 28 09:24:27 crc kubenswrapper[4996]: I0228 09:24:27.458467 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f61ee28f-ef2a-45ee-9832-57559af20a84","Type":"ContainerStarted","Data":"5cf22b26d5095eedc59d7682d13dd913ad8b60f8d2d11ecff7a6913d0285f9a3"} Feb 28 09:24:29 crc kubenswrapper[4996]: I0228 09:24:29.482299 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"77efb507-fab5-4164-8cd8-576b15f4d6f8","Type":"ContainerStarted","Data":"f14aff4e8ddcbb6b5c010280cf88a0ea732b40517a8d64d97466b09d46f8221d"} Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.739785 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-m8hrk"] Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.742120 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.748678 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.772687 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-m8hrk"] Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.923933 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.924066 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.924102 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx89m\" (UniqueName: \"kubernetes.io/projected/9f39025f-38f5-42f6-b02e-87940c2be3a4-kube-api-access-wx89m\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: 
\"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.924149 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.924204 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:31 crc kubenswrapper[4996]: I0228 09:24:31.924238 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-config\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.026408 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.026517 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: 
\"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.026546 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx89m\" (UniqueName: \"kubernetes.io/projected/9f39025f-38f5-42f6-b02e-87940c2be3a4-kube-api-access-wx89m\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.026568 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.026614 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.026642 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-config\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.027860 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-config\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " 
pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.028453 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.028502 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.028753 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.028816 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.055234 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx89m\" (UniqueName: \"kubernetes.io/projected/9f39025f-38f5-42f6-b02e-87940c2be3a4-kube-api-access-wx89m\") pod \"dnsmasq-dns-6447ccbd8f-m8hrk\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc 
kubenswrapper[4996]: I0228 09:24:32.066313 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:32 crc kubenswrapper[4996]: I0228 09:24:32.646423 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-m8hrk"] Feb 28 09:24:33 crc kubenswrapper[4996]: I0228 09:24:33.534056 4996 generic.go:334] "Generic (PLEG): container finished" podID="9f39025f-38f5-42f6-b02e-87940c2be3a4" containerID="f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213" exitCode=0 Feb 28 09:24:33 crc kubenswrapper[4996]: I0228 09:24:33.534810 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" event={"ID":"9f39025f-38f5-42f6-b02e-87940c2be3a4","Type":"ContainerDied","Data":"f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213"} Feb 28 09:24:33 crc kubenswrapper[4996]: I0228 09:24:33.534997 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" event={"ID":"9f39025f-38f5-42f6-b02e-87940c2be3a4","Type":"ContainerStarted","Data":"5db983b9c8cbab3ff00d0ee262f7b8fb3955247f736611f890dafa6020e1eb10"} Feb 28 09:24:34 crc kubenswrapper[4996]: I0228 09:24:34.553513 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" event={"ID":"9f39025f-38f5-42f6-b02e-87940c2be3a4","Type":"ContainerStarted","Data":"9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2"} Feb 28 09:24:34 crc kubenswrapper[4996]: I0228 09:24:34.553731 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:34 crc kubenswrapper[4996]: I0228 09:24:34.602961 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" podStartSLOduration=3.6029322649999997 podStartE2EDuration="3.602932265s" podCreationTimestamp="2026-02-28 09:24:31 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:24:34.586075142 +0000 UTC m=+1438.276877983" watchObservedRunningTime="2026-02-28 09:24:34.602932265 +0000 UTC m=+1438.293735106" Feb 28 09:24:40 crc kubenswrapper[4996]: I0228 09:24:40.867731 4996 scope.go:117] "RemoveContainer" containerID="054e942c70d30c1f905549216637a4e5fdcaeff9563c073c28b07d53aa2fbad7" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.068218 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.168742 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ghcc5"] Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.169281 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" podUID="25f68b33-2062-44d2-bf66-0220e8e98a58" containerName="dnsmasq-dns" containerID="cri-o://861bf2f5723fcc3cabceb1af29dfdf2c89b37e0b6070d6b3e8ab87702cd7f064" gracePeriod=10 Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.310937 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-l6k66"] Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.315508 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.333423 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-l6k66"] Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.368258 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.368332 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-config\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.368378 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.368410 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.368662 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.368712 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxjrb\" (UniqueName: \"kubernetes.io/projected/80cf7647-e72e-464f-960c-decb8700cb2d-kube-api-access-kxjrb\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.470295 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.470343 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxjrb\" (UniqueName: \"kubernetes.io/projected/80cf7647-e72e-464f-960c-decb8700cb2d-kube-api-access-kxjrb\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.470420 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.470470 4996 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-config\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.470537 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.470570 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.471359 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.471846 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.472650 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.473139 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-config\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.473208 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.511086 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxjrb\" (UniqueName: \"kubernetes.io/projected/80cf7647-e72e-464f-960c-decb8700cb2d-kube-api-access-kxjrb\") pod \"dnsmasq-dns-864d5fc68c-l6k66\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.641113 4996 generic.go:334] "Generic (PLEG): container finished" podID="25f68b33-2062-44d2-bf66-0220e8e98a58" containerID="861bf2f5723fcc3cabceb1af29dfdf2c89b37e0b6070d6b3e8ab87702cd7f064" exitCode=0 Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.641167 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" event={"ID":"25f68b33-2062-44d2-bf66-0220e8e98a58","Type":"ContainerDied","Data":"861bf2f5723fcc3cabceb1af29dfdf2c89b37e0b6070d6b3e8ab87702cd7f064"} Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.641204 4996 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" event={"ID":"25f68b33-2062-44d2-bf66-0220e8e98a58","Type":"ContainerDied","Data":"295d15878ba32e19b329d4484a4db0bc42c73b54eba2d1a3df923751b4fe86c7"} Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.641220 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="295d15878ba32e19b329d4484a4db0bc42c73b54eba2d1a3df923751b4fe86c7" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.669850 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.670614 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.672388 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s295b\" (UniqueName: \"kubernetes.io/projected/25f68b33-2062-44d2-bf66-0220e8e98a58-kube-api-access-s295b\") pod \"25f68b33-2062-44d2-bf66-0220e8e98a58\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.672588 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-nb\") pod \"25f68b33-2062-44d2-bf66-0220e8e98a58\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.673406 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-sb\") pod \"25f68b33-2062-44d2-bf66-0220e8e98a58\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.673572 4996 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-config\") pod \"25f68b33-2062-44d2-bf66-0220e8e98a58\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.673829 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-dns-svc\") pod \"25f68b33-2062-44d2-bf66-0220e8e98a58\" (UID: \"25f68b33-2062-44d2-bf66-0220e8e98a58\") " Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.676309 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f68b33-2062-44d2-bf66-0220e8e98a58-kube-api-access-s295b" (OuterVolumeSpecName: "kube-api-access-s295b") pod "25f68b33-2062-44d2-bf66-0220e8e98a58" (UID: "25f68b33-2062-44d2-bf66-0220e8e98a58"). InnerVolumeSpecName "kube-api-access-s295b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.741764 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25f68b33-2062-44d2-bf66-0220e8e98a58" (UID: "25f68b33-2062-44d2-bf66-0220e8e98a58"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.742712 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-config" (OuterVolumeSpecName: "config") pod "25f68b33-2062-44d2-bf66-0220e8e98a58" (UID: "25f68b33-2062-44d2-bf66-0220e8e98a58"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.748099 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25f68b33-2062-44d2-bf66-0220e8e98a58" (UID: "25f68b33-2062-44d2-bf66-0220e8e98a58"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.777318 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s295b\" (UniqueName: \"kubernetes.io/projected/25f68b33-2062-44d2-bf66-0220e8e98a58-kube-api-access-s295b\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.777560 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.777572 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.777588 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.785429 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25f68b33-2062-44d2-bf66-0220e8e98a58" (UID: "25f68b33-2062-44d2-bf66-0220e8e98a58"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:42 crc kubenswrapper[4996]: I0228 09:24:42.878448 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25f68b33-2062-44d2-bf66-0220e8e98a58-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:43 crc kubenswrapper[4996]: I0228 09:24:43.109294 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-l6k66"] Feb 28 09:24:43 crc kubenswrapper[4996]: W0228 09:24:43.113400 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80cf7647_e72e_464f_960c_decb8700cb2d.slice/crio-bbd02af6db264e77d647ab144702a7475f31de3054b296cd72331f4a45227405 WatchSource:0}: Error finding container bbd02af6db264e77d647ab144702a7475f31de3054b296cd72331f4a45227405: Status 404 returned error can't find the container with id bbd02af6db264e77d647ab144702a7475f31de3054b296cd72331f4a45227405 Feb 28 09:24:43 crc kubenswrapper[4996]: I0228 09:24:43.650730 4996 generic.go:334] "Generic (PLEG): container finished" podID="80cf7647-e72e-464f-960c-decb8700cb2d" containerID="75eb9fb63463ebfc71fcfbf0e23fe48ecad65a484623a790df872fbde2365683" exitCode=0 Feb 28 09:24:43 crc kubenswrapper[4996]: I0228 09:24:43.651168 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-ghcc5" Feb 28 09:24:43 crc kubenswrapper[4996]: I0228 09:24:43.652058 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" event={"ID":"80cf7647-e72e-464f-960c-decb8700cb2d","Type":"ContainerDied","Data":"75eb9fb63463ebfc71fcfbf0e23fe48ecad65a484623a790df872fbde2365683"} Feb 28 09:24:43 crc kubenswrapper[4996]: I0228 09:24:43.652116 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" event={"ID":"80cf7647-e72e-464f-960c-decb8700cb2d","Type":"ContainerStarted","Data":"bbd02af6db264e77d647ab144702a7475f31de3054b296cd72331f4a45227405"} Feb 28 09:24:43 crc kubenswrapper[4996]: I0228 09:24:43.725615 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ghcc5"] Feb 28 09:24:43 crc kubenswrapper[4996]: I0228 09:24:43.742143 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-ghcc5"] Feb 28 09:24:44 crc kubenswrapper[4996]: I0228 09:24:44.661067 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" event={"ID":"80cf7647-e72e-464f-960c-decb8700cb2d","Type":"ContainerStarted","Data":"18b617ac2d3c338d299aa0cddf81e2f2d0ea50e7e022b2d203354045ffcfcc6c"} Feb 28 09:24:44 crc kubenswrapper[4996]: I0228 09:24:44.661546 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:44 crc kubenswrapper[4996]: I0228 09:24:44.696197 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" podStartSLOduration=2.696178111 podStartE2EDuration="2.696178111s" podCreationTimestamp="2026-02-28 09:24:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:24:44.683647114 +0000 UTC 
m=+1448.374449985" watchObservedRunningTime="2026-02-28 09:24:44.696178111 +0000 UTC m=+1448.386980932" Feb 28 09:24:45 crc kubenswrapper[4996]: I0228 09:24:45.047694 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25f68b33-2062-44d2-bf66-0220e8e98a58" path="/var/lib/kubelet/pods/25f68b33-2062-44d2-bf66-0220e8e98a58/volumes" Feb 28 09:24:52 crc kubenswrapper[4996]: I0228 09:24:52.672144 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:24:52 crc kubenswrapper[4996]: I0228 09:24:52.762514 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-m8hrk"] Feb 28 09:24:52 crc kubenswrapper[4996]: I0228 09:24:52.762882 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" podUID="9f39025f-38f5-42f6-b02e-87940c2be3a4" containerName="dnsmasq-dns" containerID="cri-o://9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2" gracePeriod=10 Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.319765 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.485651 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-config\") pod \"9f39025f-38f5-42f6-b02e-87940c2be3a4\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.486582 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-dns-svc\") pod \"9f39025f-38f5-42f6-b02e-87940c2be3a4\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.486688 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-nb\") pod \"9f39025f-38f5-42f6-b02e-87940c2be3a4\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.486725 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-sb\") pod \"9f39025f-38f5-42f6-b02e-87940c2be3a4\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.486773 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx89m\" (UniqueName: \"kubernetes.io/projected/9f39025f-38f5-42f6-b02e-87940c2be3a4-kube-api-access-wx89m\") pod \"9f39025f-38f5-42f6-b02e-87940c2be3a4\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.486811 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-openstack-edpm-ipam\") pod \"9f39025f-38f5-42f6-b02e-87940c2be3a4\" (UID: \"9f39025f-38f5-42f6-b02e-87940c2be3a4\") " Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.507220 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f39025f-38f5-42f6-b02e-87940c2be3a4-kube-api-access-wx89m" (OuterVolumeSpecName: "kube-api-access-wx89m") pod "9f39025f-38f5-42f6-b02e-87940c2be3a4" (UID: "9f39025f-38f5-42f6-b02e-87940c2be3a4"). InnerVolumeSpecName "kube-api-access-wx89m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.590252 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx89m\" (UniqueName: \"kubernetes.io/projected/9f39025f-38f5-42f6-b02e-87940c2be3a4-kube-api-access-wx89m\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.590868 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f39025f-38f5-42f6-b02e-87940c2be3a4" (UID: "9f39025f-38f5-42f6-b02e-87940c2be3a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.608544 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f39025f-38f5-42f6-b02e-87940c2be3a4" (UID: "9f39025f-38f5-42f6-b02e-87940c2be3a4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.613792 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f39025f-38f5-42f6-b02e-87940c2be3a4" (UID: "9f39025f-38f5-42f6-b02e-87940c2be3a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.614345 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "9f39025f-38f5-42f6-b02e-87940c2be3a4" (UID: "9f39025f-38f5-42f6-b02e-87940c2be3a4"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.627237 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-config" (OuterVolumeSpecName: "config") pod "9f39025f-38f5-42f6-b02e-87940c2be3a4" (UID: "9f39025f-38f5-42f6-b02e-87940c2be3a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.691509 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.691552 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.691565 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.691577 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.691589 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f39025f-38f5-42f6-b02e-87940c2be3a4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.795886 4996 generic.go:334] "Generic (PLEG): container finished" podID="9f39025f-38f5-42f6-b02e-87940c2be3a4" containerID="9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2" exitCode=0 Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.795930 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" event={"ID":"9f39025f-38f5-42f6-b02e-87940c2be3a4","Type":"ContainerDied","Data":"9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2"} Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 
09:24:53.795950 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.795981 4996 scope.go:117] "RemoveContainer" containerID="9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.795960 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-m8hrk" event={"ID":"9f39025f-38f5-42f6-b02e-87940c2be3a4","Type":"ContainerDied","Data":"5db983b9c8cbab3ff00d0ee262f7b8fb3955247f736611f890dafa6020e1eb10"} Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.819421 4996 scope.go:117] "RemoveContainer" containerID="f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.826649 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-m8hrk"] Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.834908 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-m8hrk"] Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.853520 4996 scope.go:117] "RemoveContainer" containerID="9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2" Feb 28 09:24:53 crc kubenswrapper[4996]: E0228 09:24:53.854074 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2\": container with ID starting with 9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2 not found: ID does not exist" containerID="9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.854123 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2"} err="failed to get container status \"9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2\": rpc error: code = NotFound desc = could not find container \"9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2\": container with ID starting with 9c56e208c80ddc2513ffeb0d85efc15a159b282cdfeac69dd2fb1d7ea6748ad2 not found: ID does not exist" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.854152 4996 scope.go:117] "RemoveContainer" containerID="f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213" Feb 28 09:24:53 crc kubenswrapper[4996]: E0228 09:24:53.854563 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213\": container with ID starting with f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213 not found: ID does not exist" containerID="f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213" Feb 28 09:24:53 crc kubenswrapper[4996]: I0228 09:24:53.854597 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213"} err="failed to get container status \"f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213\": rpc error: code = NotFound desc = could not find container \"f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213\": container with ID starting with f00aa1c6e3088e68de3fa714b1808c07d6ee611d17f4b0760625533d03c76213 not found: ID does not exist" Feb 28 09:24:55 crc kubenswrapper[4996]: I0228 09:24:55.043746 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f39025f-38f5-42f6-b02e-87940c2be3a4" path="/var/lib/kubelet/pods/9f39025f-38f5-42f6-b02e-87940c2be3a4/volumes" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 
09:24:58.354994 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j"] Feb 28 09:24:58 crc kubenswrapper[4996]: E0228 09:24:58.355842 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f68b33-2062-44d2-bf66-0220e8e98a58" containerName="init" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.355855 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f68b33-2062-44d2-bf66-0220e8e98a58" containerName="init" Feb 28 09:24:58 crc kubenswrapper[4996]: E0228 09:24:58.355889 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f39025f-38f5-42f6-b02e-87940c2be3a4" containerName="dnsmasq-dns" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.355895 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f39025f-38f5-42f6-b02e-87940c2be3a4" containerName="dnsmasq-dns" Feb 28 09:24:58 crc kubenswrapper[4996]: E0228 09:24:58.355905 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f68b33-2062-44d2-bf66-0220e8e98a58" containerName="dnsmasq-dns" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.355911 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f68b33-2062-44d2-bf66-0220e8e98a58" containerName="dnsmasq-dns" Feb 28 09:24:58 crc kubenswrapper[4996]: E0228 09:24:58.355923 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f39025f-38f5-42f6-b02e-87940c2be3a4" containerName="init" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.355930 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f39025f-38f5-42f6-b02e-87940c2be3a4" containerName="init" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.356107 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f39025f-38f5-42f6-b02e-87940c2be3a4" containerName="dnsmasq-dns" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.356131 4996 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="25f68b33-2062-44d2-bf66-0220e8e98a58" containerName="dnsmasq-dns" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.356723 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.358612 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.358991 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.359578 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.359833 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.377814 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j"] Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.423947 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fctk5\" (UniqueName: \"kubernetes.io/projected/5a18a047-3d57-467d-a116-6ccd83c7b54a-kube-api-access-fctk5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.424026 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-ssh-key-openstack-edpm-ipam\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.424124 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.424173 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.526195 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fctk5\" (UniqueName: \"kubernetes.io/projected/5a18a047-3d57-467d-a116-6ccd83c7b54a-kube-api-access-fctk5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.526248 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.526324 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.526353 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.532353 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.535458 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.535965 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.549531 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fctk5\" (UniqueName: \"kubernetes.io/projected/5a18a047-3d57-467d-a116-6ccd83c7b54a-kube-api-access-fctk5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:58 crc kubenswrapper[4996]: I0228 09:24:58.678551 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:24:59 crc kubenswrapper[4996]: I0228 09:24:59.298413 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j"] Feb 28 09:24:59 crc kubenswrapper[4996]: W0228 09:24:59.304929 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a18a047_3d57_467d_a116_6ccd83c7b54a.slice/crio-d7367682dea94d93a28e6c8701219ecb350d4c8e5937cbd63980324af8485ca4 WatchSource:0}: Error finding container d7367682dea94d93a28e6c8701219ecb350d4c8e5937cbd63980324af8485ca4: Status 404 returned error can't find the container with id d7367682dea94d93a28e6c8701219ecb350d4c8e5937cbd63980324af8485ca4 Feb 28 09:24:59 crc kubenswrapper[4996]: I0228 09:24:59.866653 4996 generic.go:334] "Generic (PLEG): container finished" podID="f61ee28f-ef2a-45ee-9832-57559af20a84" containerID="5cf22b26d5095eedc59d7682d13dd913ad8b60f8d2d11ecff7a6913d0285f9a3" exitCode=0 Feb 28 09:24:59 crc kubenswrapper[4996]: I0228 09:24:59.866768 4996 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f61ee28f-ef2a-45ee-9832-57559af20a84","Type":"ContainerDied","Data":"5cf22b26d5095eedc59d7682d13dd913ad8b60f8d2d11ecff7a6913d0285f9a3"} Feb 28 09:24:59 crc kubenswrapper[4996]: I0228 09:24:59.871996 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" event={"ID":"5a18a047-3d57-467d-a116-6ccd83c7b54a","Type":"ContainerStarted","Data":"d7367682dea94d93a28e6c8701219ecb350d4c8e5937cbd63980324af8485ca4"} Feb 28 09:25:00 crc kubenswrapper[4996]: I0228 09:25:00.882727 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f61ee28f-ef2a-45ee-9832-57559af20a84","Type":"ContainerStarted","Data":"3fdaff573f3a689234ade0e40cba1160731bf38238fa72be0f300e1bd7d53cd9"} Feb 28 09:25:00 crc kubenswrapper[4996]: I0228 09:25:00.883963 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 28 09:25:00 crc kubenswrapper[4996]: I0228 09:25:00.923319 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.92330446 podStartE2EDuration="36.92330446s" podCreationTimestamp="2026-02-28 09:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:25:00.92043598 +0000 UTC m=+1464.611238801" watchObservedRunningTime="2026-02-28 09:25:00.92330446 +0000 UTC m=+1464.614107261" Feb 28 09:25:01 crc kubenswrapper[4996]: I0228 09:25:01.897202 4996 generic.go:334] "Generic (PLEG): container finished" podID="77efb507-fab5-4164-8cd8-576b15f4d6f8" containerID="f14aff4e8ddcbb6b5c010280cf88a0ea732b40517a8d64d97466b09d46f8221d" exitCode=0 Feb 28 09:25:01 crc kubenswrapper[4996]: I0228 09:25:01.897295 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"77efb507-fab5-4164-8cd8-576b15f4d6f8","Type":"ContainerDied","Data":"f14aff4e8ddcbb6b5c010280cf88a0ea732b40517a8d64d97466b09d46f8221d"} Feb 28 09:25:02 crc kubenswrapper[4996]: I0228 09:25:02.909897 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"77efb507-fab5-4164-8cd8-576b15f4d6f8","Type":"ContainerStarted","Data":"901c8631d0643285ecfb4f0a455413689202c8e97a56c5a6b37fea7df278534f"} Feb 28 09:25:02 crc kubenswrapper[4996]: I0228 09:25:02.910397 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:25:02 crc kubenswrapper[4996]: I0228 09:25:02.944065 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.94403812 podStartE2EDuration="36.94403812s" podCreationTimestamp="2026-02-28 09:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:25:02.93256131 +0000 UTC m=+1466.623364141" watchObservedRunningTime="2026-02-28 09:25:02.94403812 +0000 UTC m=+1466.634840941" Feb 28 09:25:08 crc kubenswrapper[4996]: I0228 09:25:08.981431 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" event={"ID":"5a18a047-3d57-467d-a116-6ccd83c7b54a","Type":"ContainerStarted","Data":"58624d7af507025806c26c50d319710894c7ecb76d93f2a00be40d90c0e051be"} Feb 28 09:25:09 crc kubenswrapper[4996]: I0228 09:25:09.011992 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" podStartSLOduration=1.7912485550000001 podStartE2EDuration="11.011963252s" podCreationTimestamp="2026-02-28 09:24:58 +0000 UTC" firstStartedPulling="2026-02-28 09:24:59.307462763 +0000 UTC m=+1462.998265574" 
lastFinishedPulling="2026-02-28 09:25:08.52817746 +0000 UTC m=+1472.218980271" observedRunningTime="2026-02-28 09:25:08.999407905 +0000 UTC m=+1472.690210756" watchObservedRunningTime="2026-02-28 09:25:09.011963252 +0000 UTC m=+1472.702766083" Feb 28 09:25:14 crc kubenswrapper[4996]: I0228 09:25:14.833377 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 28 09:25:16 crc kubenswrapper[4996]: I0228 09:25:16.956219 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 28 09:25:19 crc kubenswrapper[4996]: I0228 09:25:19.100796 4996 generic.go:334] "Generic (PLEG): container finished" podID="5a18a047-3d57-467d-a116-6ccd83c7b54a" containerID="58624d7af507025806c26c50d319710894c7ecb76d93f2a00be40d90c0e051be" exitCode=0 Feb 28 09:25:19 crc kubenswrapper[4996]: I0228 09:25:19.101113 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" event={"ID":"5a18a047-3d57-467d-a116-6ccd83c7b54a","Type":"ContainerDied","Data":"58624d7af507025806c26c50d319710894c7ecb76d93f2a00be40d90c0e051be"} Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.521376 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.700681 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-inventory\") pod \"5a18a047-3d57-467d-a116-6ccd83c7b54a\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.701115 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fctk5\" (UniqueName: \"kubernetes.io/projected/5a18a047-3d57-467d-a116-6ccd83c7b54a-kube-api-access-fctk5\") pod \"5a18a047-3d57-467d-a116-6ccd83c7b54a\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.701268 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-ssh-key-openstack-edpm-ipam\") pod \"5a18a047-3d57-467d-a116-6ccd83c7b54a\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.701420 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-repo-setup-combined-ca-bundle\") pod \"5a18a047-3d57-467d-a116-6ccd83c7b54a\" (UID: \"5a18a047-3d57-467d-a116-6ccd83c7b54a\") " Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.707045 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a18a047-3d57-467d-a116-6ccd83c7b54a-kube-api-access-fctk5" (OuterVolumeSpecName: "kube-api-access-fctk5") pod "5a18a047-3d57-467d-a116-6ccd83c7b54a" (UID: "5a18a047-3d57-467d-a116-6ccd83c7b54a"). InnerVolumeSpecName "kube-api-access-fctk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.708426 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5a18a047-3d57-467d-a116-6ccd83c7b54a" (UID: "5a18a047-3d57-467d-a116-6ccd83c7b54a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.730962 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5a18a047-3d57-467d-a116-6ccd83c7b54a" (UID: "5a18a047-3d57-467d-a116-6ccd83c7b54a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.755315 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-inventory" (OuterVolumeSpecName: "inventory") pod "5a18a047-3d57-467d-a116-6ccd83c7b54a" (UID: "5a18a047-3d57-467d-a116-6ccd83c7b54a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.804493 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.804540 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fctk5\" (UniqueName: \"kubernetes.io/projected/5a18a047-3d57-467d-a116-6ccd83c7b54a-kube-api-access-fctk5\") on node \"crc\" DevicePath \"\"" Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.804555 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:25:20 crc kubenswrapper[4996]: I0228 09:25:20.804573 4996 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a18a047-3d57-467d-a116-6ccd83c7b54a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.126306 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" event={"ID":"5a18a047-3d57-467d-a116-6ccd83c7b54a","Type":"ContainerDied","Data":"d7367682dea94d93a28e6c8701219ecb350d4c8e5937cbd63980324af8485ca4"} Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.126696 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7367682dea94d93a28e6c8701219ecb350d4c8e5937cbd63980324af8485ca4" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.126437 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.233881 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf"] Feb 28 09:25:21 crc kubenswrapper[4996]: E0228 09:25:21.234415 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a18a047-3d57-467d-a116-6ccd83c7b54a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.234448 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a18a047-3d57-467d-a116-6ccd83c7b54a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.234718 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a18a047-3d57-467d-a116-6ccd83c7b54a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.235667 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.239567 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.239589 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.240260 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.248363 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.254969 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf"] Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.417552 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.417618 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mptvg\" (UniqueName: \"kubernetes.io/projected/17f535e2-b6a2-40de-bf63-160b7aeb3b70-kube-api-access-mptvg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 
09:25:21.417755 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.417843 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.520146 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mptvg\" (UniqueName: \"kubernetes.io/projected/17f535e2-b6a2-40de-bf63-160b7aeb3b70-kube-api-access-mptvg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.520213 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.520243 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.520421 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.525956 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.526027 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.527661 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.551949 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mptvg\" (UniqueName: \"kubernetes.io/projected/17f535e2-b6a2-40de-bf63-160b7aeb3b70-kube-api-access-mptvg\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:21 crc kubenswrapper[4996]: I0228 09:25:21.556368 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:25:22 crc kubenswrapper[4996]: I0228 09:25:22.099335 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf"] Feb 28 09:25:22 crc kubenswrapper[4996]: I0228 09:25:22.145234 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" event={"ID":"17f535e2-b6a2-40de-bf63-160b7aeb3b70","Type":"ContainerStarted","Data":"1aba80f2842b5bdf1d4c8341dad4a1596b7dea227fe3a32aad792d0fb25480d2"} Feb 28 09:25:23 crc kubenswrapper[4996]: I0228 09:25:23.158607 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" event={"ID":"17f535e2-b6a2-40de-bf63-160b7aeb3b70","Type":"ContainerStarted","Data":"05619ef064203ce8ae9d29be8ef07bc7f52c07f17927f3e11b01f490217c1834"} Feb 28 09:25:23 crc kubenswrapper[4996]: I0228 09:25:23.188588 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" podStartSLOduration=1.536124192 podStartE2EDuration="2.188564459s" podCreationTimestamp="2026-02-28 09:25:21 +0000 UTC" firstStartedPulling="2026-02-28 09:25:22.104076009 +0000 UTC m=+1485.794878840" 
lastFinishedPulling="2026-02-28 09:25:22.756516276 +0000 UTC m=+1486.447319107" observedRunningTime="2026-02-28 09:25:23.177450189 +0000 UTC m=+1486.868253010" watchObservedRunningTime="2026-02-28 09:25:23.188564459 +0000 UTC m=+1486.879367270" Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.140199 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537846-f58cg"] Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.141878 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537846-f58cg" Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.144494 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.144518 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.144607 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.152855 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537846-f58cg"] Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.211710 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjdfn\" (UniqueName: \"kubernetes.io/projected/a66be9fb-71a8-4d58-8a10-1cbba0fe325f-kube-api-access-bjdfn\") pod \"auto-csr-approver-29537846-f58cg\" (UID: \"a66be9fb-71a8-4d58-8a10-1cbba0fe325f\") " pod="openshift-infra/auto-csr-approver-29537846-f58cg" Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.313787 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjdfn\" (UniqueName: 
\"kubernetes.io/projected/a66be9fb-71a8-4d58-8a10-1cbba0fe325f-kube-api-access-bjdfn\") pod \"auto-csr-approver-29537846-f58cg\" (UID: \"a66be9fb-71a8-4d58-8a10-1cbba0fe325f\") " pod="openshift-infra/auto-csr-approver-29537846-f58cg" Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.353507 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjdfn\" (UniqueName: \"kubernetes.io/projected/a66be9fb-71a8-4d58-8a10-1cbba0fe325f-kube-api-access-bjdfn\") pod \"auto-csr-approver-29537846-f58cg\" (UID: \"a66be9fb-71a8-4d58-8a10-1cbba0fe325f\") " pod="openshift-infra/auto-csr-approver-29537846-f58cg" Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.467418 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537846-f58cg" Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.950594 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537846-f58cg"] Feb 28 09:26:00 crc kubenswrapper[4996]: I0228 09:26:00.958073 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:26:01 crc kubenswrapper[4996]: I0228 09:26:01.605187 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537846-f58cg" event={"ID":"a66be9fb-71a8-4d58-8a10-1cbba0fe325f","Type":"ContainerStarted","Data":"dc69ad4015493e6be0d53588059ed03fc4f089066cfcecd39a951aee51ab77b4"} Feb 28 09:26:02 crc kubenswrapper[4996]: E0228 09:26:02.429103 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda66be9fb_71a8_4d58_8a10_1cbba0fe325f.slice/crio-410d5881214954dcc76ac8c0ce6464581f4d9ad243e263b9cbca79d41f5aa82e.scope\": RecentStats: unable to find data in memory cache]" Feb 28 09:26:02 crc kubenswrapper[4996]: I0228 09:26:02.620416 4996 generic.go:334] 
"Generic (PLEG): container finished" podID="a66be9fb-71a8-4d58-8a10-1cbba0fe325f" containerID="410d5881214954dcc76ac8c0ce6464581f4d9ad243e263b9cbca79d41f5aa82e" exitCode=0 Feb 28 09:26:02 crc kubenswrapper[4996]: I0228 09:26:02.620482 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537846-f58cg" event={"ID":"a66be9fb-71a8-4d58-8a10-1cbba0fe325f","Type":"ContainerDied","Data":"410d5881214954dcc76ac8c0ce6464581f4d9ad243e263b9cbca79d41f5aa82e"} Feb 28 09:26:04 crc kubenswrapper[4996]: I0228 09:26:04.002863 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537846-f58cg" Feb 28 09:26:04 crc kubenswrapper[4996]: I0228 09:26:04.086988 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjdfn\" (UniqueName: \"kubernetes.io/projected/a66be9fb-71a8-4d58-8a10-1cbba0fe325f-kube-api-access-bjdfn\") pod \"a66be9fb-71a8-4d58-8a10-1cbba0fe325f\" (UID: \"a66be9fb-71a8-4d58-8a10-1cbba0fe325f\") " Feb 28 09:26:04 crc kubenswrapper[4996]: I0228 09:26:04.093618 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66be9fb-71a8-4d58-8a10-1cbba0fe325f-kube-api-access-bjdfn" (OuterVolumeSpecName: "kube-api-access-bjdfn") pod "a66be9fb-71a8-4d58-8a10-1cbba0fe325f" (UID: "a66be9fb-71a8-4d58-8a10-1cbba0fe325f"). InnerVolumeSpecName "kube-api-access-bjdfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:26:04 crc kubenswrapper[4996]: I0228 09:26:04.190309 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjdfn\" (UniqueName: \"kubernetes.io/projected/a66be9fb-71a8-4d58-8a10-1cbba0fe325f-kube-api-access-bjdfn\") on node \"crc\" DevicePath \"\"" Feb 28 09:26:04 crc kubenswrapper[4996]: I0228 09:26:04.646466 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537846-f58cg" event={"ID":"a66be9fb-71a8-4d58-8a10-1cbba0fe325f","Type":"ContainerDied","Data":"dc69ad4015493e6be0d53588059ed03fc4f089066cfcecd39a951aee51ab77b4"} Feb 28 09:26:04 crc kubenswrapper[4996]: I0228 09:26:04.646524 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc69ad4015493e6be0d53588059ed03fc4f089066cfcecd39a951aee51ab77b4" Feb 28 09:26:04 crc kubenswrapper[4996]: I0228 09:26:04.646530 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537846-f58cg" Feb 28 09:26:05 crc kubenswrapper[4996]: I0228 09:26:05.102884 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537840-zfw87"] Feb 28 09:26:05 crc kubenswrapper[4996]: I0228 09:26:05.110688 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537840-zfw87"] Feb 28 09:26:07 crc kubenswrapper[4996]: I0228 09:26:07.046413 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="555bf038-cf05-4545-9254-1ef90fad3514" path="/var/lib/kubelet/pods/555bf038-cf05-4545-9254-1ef90fad3514/volumes" Feb 28 09:26:12 crc kubenswrapper[4996]: I0228 09:26:12.249305 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 09:26:12 crc kubenswrapper[4996]: I0228 09:26:12.249649 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:26:41 crc kubenswrapper[4996]: I0228 09:26:41.185596 4996 scope.go:117] "RemoveContainer" containerID="4a2a1612129b304cbd3aa21f2aa1f131e06f3684f22a0e6d05b6f46704e293aa" Feb 28 09:26:41 crc kubenswrapper[4996]: I0228 09:26:41.238783 4996 scope.go:117] "RemoveContainer" containerID="b4ce85c1414d6913330f1610a7dab62169325f26bf8a8ddf181076f51f1d8a61" Feb 28 09:26:41 crc kubenswrapper[4996]: I0228 09:26:41.296259 4996 scope.go:117] "RemoveContainer" containerID="1d52320fe6d11c60755fa9079e52f16d6e24e81203ebb939756a2eebe4ab4153" Feb 28 09:26:41 crc kubenswrapper[4996]: I0228 09:26:41.333136 4996 scope.go:117] "RemoveContainer" containerID="66b7957b93432cd124025db8010c4f8251aa35695b9ca057cd207b0509f6fc75" Feb 28 09:26:41 crc kubenswrapper[4996]: I0228 09:26:41.364061 4996 scope.go:117] "RemoveContainer" containerID="c5b612763df809a963942ea8b68f42896f0908c5065bea965467f097ca17d47b" Feb 28 09:26:42 crc kubenswrapper[4996]: I0228 09:26:42.248565 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:26:42 crc kubenswrapper[4996]: I0228 09:26:42.248928 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:27:12 crc kubenswrapper[4996]: I0228 09:27:12.249069 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:27:12 crc kubenswrapper[4996]: I0228 09:27:12.249673 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:27:12 crc kubenswrapper[4996]: I0228 09:27:12.249737 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:27:12 crc kubenswrapper[4996]: I0228 09:27:12.250913 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:27:12 crc kubenswrapper[4996]: I0228 09:27:12.250985 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" gracePeriod=600 Feb 28 09:27:12 crc kubenswrapper[4996]: E0228 09:27:12.377792 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:27:13 crc kubenswrapper[4996]: I0228 09:27:13.354776 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" exitCode=0 Feb 28 09:27:13 crc kubenswrapper[4996]: I0228 09:27:13.354955 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e"} Feb 28 09:27:13 crc kubenswrapper[4996]: I0228 09:27:13.355174 4996 scope.go:117] "RemoveContainer" containerID="0d74e0665ce63a7b1e3ccb10e05382d63c764d169c6c0125d3275a4454729a94" Feb 28 09:27:13 crc kubenswrapper[4996]: I0228 09:27:13.355887 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:27:13 crc kubenswrapper[4996]: E0228 09:27:13.356244 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:27:25 crc kubenswrapper[4996]: I0228 09:27:25.036359 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:27:25 
crc kubenswrapper[4996]: E0228 09:27:25.037510 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:27:25 crc kubenswrapper[4996]: I0228 09:27:25.925762 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxs8n"] Feb 28 09:27:25 crc kubenswrapper[4996]: E0228 09:27:25.926680 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a66be9fb-71a8-4d58-8a10-1cbba0fe325f" containerName="oc" Feb 28 09:27:25 crc kubenswrapper[4996]: I0228 09:27:25.926722 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a66be9fb-71a8-4d58-8a10-1cbba0fe325f" containerName="oc" Feb 28 09:27:25 crc kubenswrapper[4996]: I0228 09:27:25.927287 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66be9fb-71a8-4d58-8a10-1cbba0fe325f" containerName="oc" Feb 28 09:27:25 crc kubenswrapper[4996]: I0228 09:27:25.931268 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:25 crc kubenswrapper[4996]: I0228 09:27:25.941997 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxs8n"] Feb 28 09:27:25 crc kubenswrapper[4996]: I0228 09:27:25.959471 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-catalog-content\") pod \"redhat-marketplace-lxs8n\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:25 crc kubenswrapper[4996]: I0228 09:27:25.960133 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-utilities\") pod \"redhat-marketplace-lxs8n\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:25 crc kubenswrapper[4996]: I0228 09:27:25.960187 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfwqv\" (UniqueName: \"kubernetes.io/projected/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-kube-api-access-vfwqv\") pod \"redhat-marketplace-lxs8n\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:26 crc kubenswrapper[4996]: I0228 09:27:26.061597 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-catalog-content\") pod \"redhat-marketplace-lxs8n\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:26 crc kubenswrapper[4996]: I0228 09:27:26.061752 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-utilities\") pod \"redhat-marketplace-lxs8n\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:26 crc kubenswrapper[4996]: I0228 09:27:26.061783 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfwqv\" (UniqueName: \"kubernetes.io/projected/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-kube-api-access-vfwqv\") pod \"redhat-marketplace-lxs8n\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:26 crc kubenswrapper[4996]: I0228 09:27:26.062268 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-catalog-content\") pod \"redhat-marketplace-lxs8n\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:26 crc kubenswrapper[4996]: I0228 09:27:26.062469 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-utilities\") pod \"redhat-marketplace-lxs8n\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:26 crc kubenswrapper[4996]: I0228 09:27:26.088454 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfwqv\" (UniqueName: \"kubernetes.io/projected/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-kube-api-access-vfwqv\") pod \"redhat-marketplace-lxs8n\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:26 crc kubenswrapper[4996]: I0228 09:27:26.261492 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:26 crc kubenswrapper[4996]: I0228 09:27:26.704285 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxs8n"] Feb 28 09:27:27 crc kubenswrapper[4996]: I0228 09:27:27.524331 4996 generic.go:334] "Generic (PLEG): container finished" podID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerID="0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48" exitCode=0 Feb 28 09:27:27 crc kubenswrapper[4996]: I0228 09:27:27.524434 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxs8n" event={"ID":"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb","Type":"ContainerDied","Data":"0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48"} Feb 28 09:27:27 crc kubenswrapper[4996]: I0228 09:27:27.524630 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxs8n" event={"ID":"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb","Type":"ContainerStarted","Data":"8ac02229f976291926dd6a1d12a3812609e95b4fbd8a6e6581bd5ec1cb1cc9ef"} Feb 28 09:27:28 crc kubenswrapper[4996]: I0228 09:27:28.535416 4996 generic.go:334] "Generic (PLEG): container finished" podID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerID="3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f" exitCode=0 Feb 28 09:27:28 crc kubenswrapper[4996]: I0228 09:27:28.535966 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxs8n" event={"ID":"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb","Type":"ContainerDied","Data":"3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f"} Feb 28 09:27:29 crc kubenswrapper[4996]: I0228 09:27:29.545312 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxs8n" 
event={"ID":"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb","Type":"ContainerStarted","Data":"47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220"} Feb 28 09:27:29 crc kubenswrapper[4996]: I0228 09:27:29.578032 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxs8n" podStartSLOduration=3.137627719 podStartE2EDuration="4.577993056s" podCreationTimestamp="2026-02-28 09:27:25 +0000 UTC" firstStartedPulling="2026-02-28 09:27:27.527196849 +0000 UTC m=+1611.217999700" lastFinishedPulling="2026-02-28 09:27:28.967562216 +0000 UTC m=+1612.658365037" observedRunningTime="2026-02-28 09:27:29.573560029 +0000 UTC m=+1613.264362840" watchObservedRunningTime="2026-02-28 09:27:29.577993056 +0000 UTC m=+1613.268795867" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.306229 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rxcm2"] Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.310233 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.319384 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxcm2"] Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.380252 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-catalog-content\") pod \"community-operators-rxcm2\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.380324 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zh99\" (UniqueName: \"kubernetes.io/projected/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-kube-api-access-6zh99\") pod \"community-operators-rxcm2\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.380548 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-utilities\") pod \"community-operators-rxcm2\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.482640 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zh99\" (UniqueName: \"kubernetes.io/projected/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-kube-api-access-6zh99\") pod \"community-operators-rxcm2\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.482790 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-utilities\") pod \"community-operators-rxcm2\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.482857 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-catalog-content\") pod \"community-operators-rxcm2\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.483361 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-catalog-content\") pod \"community-operators-rxcm2\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.483488 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-utilities\") pod \"community-operators-rxcm2\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.503801 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zh99\" (UniqueName: \"kubernetes.io/projected/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-kube-api-access-6zh99\") pod \"community-operators-rxcm2\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:32 crc kubenswrapper[4996]: I0228 09:27:32.635443 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:33 crc kubenswrapper[4996]: I0228 09:27:33.227415 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rxcm2"] Feb 28 09:27:33 crc kubenswrapper[4996]: I0228 09:27:33.587722 4996 generic.go:334] "Generic (PLEG): container finished" podID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerID="a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa" exitCode=0 Feb 28 09:27:33 crc kubenswrapper[4996]: I0228 09:27:33.587760 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxcm2" event={"ID":"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb","Type":"ContainerDied","Data":"a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa"} Feb 28 09:27:33 crc kubenswrapper[4996]: I0228 09:27:33.587783 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxcm2" event={"ID":"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb","Type":"ContainerStarted","Data":"ae2dff2cc940c6bbd9cd1d579d77344cd93bb83a3ad3c504a343761dc4a8605f"} Feb 28 09:27:34 crc kubenswrapper[4996]: I0228 09:27:34.596076 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxcm2" event={"ID":"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb","Type":"ContainerStarted","Data":"e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79"} Feb 28 09:27:35 crc kubenswrapper[4996]: I0228 09:27:35.607898 4996 generic.go:334] "Generic (PLEG): container finished" podID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerID="e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79" exitCode=0 Feb 28 09:27:35 crc kubenswrapper[4996]: I0228 09:27:35.607973 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxcm2" 
event={"ID":"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb","Type":"ContainerDied","Data":"e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79"} Feb 28 09:27:36 crc kubenswrapper[4996]: I0228 09:27:36.261925 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:36 crc kubenswrapper[4996]: I0228 09:27:36.262254 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:36 crc kubenswrapper[4996]: I0228 09:27:36.320263 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:36 crc kubenswrapper[4996]: I0228 09:27:36.621922 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxcm2" event={"ID":"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb","Type":"ContainerStarted","Data":"c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a"} Feb 28 09:27:36 crc kubenswrapper[4996]: I0228 09:27:36.655908 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rxcm2" podStartSLOduration=1.884780168 podStartE2EDuration="4.655890953s" podCreationTimestamp="2026-02-28 09:27:32 +0000 UTC" firstStartedPulling="2026-02-28 09:27:33.589918265 +0000 UTC m=+1617.280721076" lastFinishedPulling="2026-02-28 09:27:36.36102904 +0000 UTC m=+1620.051831861" observedRunningTime="2026-02-28 09:27:36.654664873 +0000 UTC m=+1620.345467704" watchObservedRunningTime="2026-02-28 09:27:36.655890953 +0000 UTC m=+1620.346693764" Feb 28 09:27:36 crc kubenswrapper[4996]: I0228 09:27:36.698646 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:38 crc kubenswrapper[4996]: I0228 09:27:38.691709 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-lxs8n"] Feb 28 09:27:39 crc kubenswrapper[4996]: I0228 09:27:39.033253 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:27:39 crc kubenswrapper[4996]: E0228 09:27:39.033792 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:27:39 crc kubenswrapper[4996]: I0228 09:27:39.658401 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxs8n" podUID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerName="registry-server" containerID="cri-o://47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220" gracePeriod=2 Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.195114 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.353901 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-catalog-content\") pod \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.354381 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfwqv\" (UniqueName: \"kubernetes.io/projected/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-kube-api-access-vfwqv\") pod \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.354682 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-utilities\") pod \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\" (UID: \"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb\") " Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.355291 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-utilities" (OuterVolumeSpecName: "utilities") pod "b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" (UID: "b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.362133 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-kube-api-access-vfwqv" (OuterVolumeSpecName: "kube-api-access-vfwqv") pod "b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" (UID: "b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb"). InnerVolumeSpecName "kube-api-access-vfwqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.399042 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" (UID: "b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.456946 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfwqv\" (UniqueName: \"kubernetes.io/projected/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-kube-api-access-vfwqv\") on node \"crc\" DevicePath \"\"" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.457363 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.457589 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.675720 4996 generic.go:334] "Generic (PLEG): container finished" podID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerID="47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220" exitCode=0 Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.675783 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxs8n" event={"ID":"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb","Type":"ContainerDied","Data":"47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220"} Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.675814 4996 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-lxs8n" event={"ID":"b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb","Type":"ContainerDied","Data":"8ac02229f976291926dd6a1d12a3812609e95b4fbd8a6e6581bd5ec1cb1cc9ef"} Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.675835 4996 scope.go:117] "RemoveContainer" containerID="47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.675898 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxs8n" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.705864 4996 scope.go:117] "RemoveContainer" containerID="3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.730085 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxs8n"] Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.738516 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxs8n"] Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.746047 4996 scope.go:117] "RemoveContainer" containerID="0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.809352 4996 scope.go:117] "RemoveContainer" containerID="47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220" Feb 28 09:27:40 crc kubenswrapper[4996]: E0228 09:27:40.818241 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220\": container with ID starting with 47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220 not found: ID does not exist" containerID="47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.818433 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220"} err="failed to get container status \"47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220\": rpc error: code = NotFound desc = could not find container \"47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220\": container with ID starting with 47c66cc227404ca85ac7b0e7d0caf67b220eb288cb5f67a3f09a8a28aab74220 not found: ID does not exist" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.818512 4996 scope.go:117] "RemoveContainer" containerID="3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f" Feb 28 09:27:40 crc kubenswrapper[4996]: E0228 09:27:40.819075 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f\": container with ID starting with 3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f not found: ID does not exist" containerID="3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.819204 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f"} err="failed to get container status \"3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f\": rpc error: code = NotFound desc = could not find container \"3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f\": container with ID starting with 3fe96f34f9185440db7b13c42f194e399a6ff940366a996d3f9808232446951f not found: ID does not exist" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.819293 4996 scope.go:117] "RemoveContainer" containerID="0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48" Feb 28 09:27:40 crc kubenswrapper[4996]: E0228 
09:27:40.819680 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48\": container with ID starting with 0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48 not found: ID does not exist" containerID="0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48" Feb 28 09:27:40 crc kubenswrapper[4996]: I0228 09:27:40.819775 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48"} err="failed to get container status \"0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48\": rpc error: code = NotFound desc = could not find container \"0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48\": container with ID starting with 0b393c09b4cd0cce5ed718b3c44a69aa009aa77353afd07d571ee5a3f11e0c48 not found: ID does not exist" Feb 28 09:27:41 crc kubenswrapper[4996]: I0228 09:27:41.049340 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" path="/var/lib/kubelet/pods/b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb/volumes" Feb 28 09:27:42 crc kubenswrapper[4996]: I0228 09:27:42.636105 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:42 crc kubenswrapper[4996]: I0228 09:27:42.636494 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:42 crc kubenswrapper[4996]: I0228 09:27:42.707564 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:42 crc kubenswrapper[4996]: I0228 09:27:42.779678 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:43 crc kubenswrapper[4996]: I0228 09:27:43.680314 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxcm2"] Feb 28 09:27:44 crc kubenswrapper[4996]: I0228 09:27:44.721767 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rxcm2" podUID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerName="registry-server" containerID="cri-o://c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a" gracePeriod=2 Feb 28 09:27:44 crc kubenswrapper[4996]: E0228 09:27:44.871782 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca8b39c3_1ee7_4734_ac29_5f6b7d4a0eeb.slice/crio-c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca8b39c3_1ee7_4734_ac29_5f6b7d4a0eeb.slice/crio-conmon-c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a.scope\": RecentStats: unable to find data in memory cache]" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.261586 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.352734 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zh99\" (UniqueName: \"kubernetes.io/projected/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-kube-api-access-6zh99\") pod \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.353446 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-catalog-content\") pod \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.353663 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-utilities\") pod \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\" (UID: \"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb\") " Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.355541 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-utilities" (OuterVolumeSpecName: "utilities") pod "ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" (UID: "ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.359112 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-kube-api-access-6zh99" (OuterVolumeSpecName: "kube-api-access-6zh99") pod "ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" (UID: "ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb"). InnerVolumeSpecName "kube-api-access-6zh99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.456235 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.456276 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zh99\" (UniqueName: \"kubernetes.io/projected/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-kube-api-access-6zh99\") on node \"crc\" DevicePath \"\"" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.739512 4996 generic.go:334] "Generic (PLEG): container finished" podID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerID="c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a" exitCode=0 Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.739605 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxcm2" event={"ID":"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb","Type":"ContainerDied","Data":"c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a"} Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.739680 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rxcm2" event={"ID":"ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb","Type":"ContainerDied","Data":"ae2dff2cc940c6bbd9cd1d579d77344cd93bb83a3ad3c504a343761dc4a8605f"} Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.739745 4996 scope.go:117] "RemoveContainer" containerID="c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.740158 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rxcm2" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.772717 4996 scope.go:117] "RemoveContainer" containerID="e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.806054 4996 scope.go:117] "RemoveContainer" containerID="a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.873983 4996 scope.go:117] "RemoveContainer" containerID="c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a" Feb 28 09:27:45 crc kubenswrapper[4996]: E0228 09:27:45.874516 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a\": container with ID starting with c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a not found: ID does not exist" containerID="c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.874566 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a"} err="failed to get container status \"c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a\": rpc error: code = NotFound desc = could not find container \"c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a\": container with ID starting with c76b08af3133f6018b7ea8c79db293249cbdc9c27297dbd1b9d87a894aaafc4a not found: ID does not exist" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.874594 4996 scope.go:117] "RemoveContainer" containerID="e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79" Feb 28 09:27:45 crc kubenswrapper[4996]: E0228 09:27:45.875211 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79\": container with ID starting with e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79 not found: ID does not exist" containerID="e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.875266 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79"} err="failed to get container status \"e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79\": rpc error: code = NotFound desc = could not find container \"e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79\": container with ID starting with e07673cecb7b4dbcd9339786b5591949e5f3307c79aa59d4beda307f56326e79 not found: ID does not exist" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.875300 4996 scope.go:117] "RemoveContainer" containerID="a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa" Feb 28 09:27:45 crc kubenswrapper[4996]: E0228 09:27:45.875565 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa\": container with ID starting with a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa not found: ID does not exist" containerID="a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa" Feb 28 09:27:45 crc kubenswrapper[4996]: I0228 09:27:45.875599 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa"} err="failed to get container status \"a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa\": rpc error: code = NotFound desc = could not find container 
\"a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa\": container with ID starting with a3e82719359b17c3af0b4140505737eb50f3a82633ba1e920391510a557cc3aa not found: ID does not exist" Feb 28 09:27:46 crc kubenswrapper[4996]: I0228 09:27:46.083298 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" (UID: "ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:27:46 crc kubenswrapper[4996]: I0228 09:27:46.172659 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:27:46 crc kubenswrapper[4996]: I0228 09:27:46.412929 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rxcm2"] Feb 28 09:27:46 crc kubenswrapper[4996]: I0228 09:27:46.430966 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rxcm2"] Feb 28 09:27:47 crc kubenswrapper[4996]: I0228 09:27:47.055735 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" path="/var/lib/kubelet/pods/ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb/volumes" Feb 28 09:27:53 crc kubenswrapper[4996]: I0228 09:27:53.033760 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:27:53 crc kubenswrapper[4996]: E0228 09:27:53.034583 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.144191 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537848-sdf7l"] Feb 28 09:28:00 crc kubenswrapper[4996]: E0228 09:28:00.145352 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerName="extract-content" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.145375 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerName="extract-content" Feb 28 09:28:00 crc kubenswrapper[4996]: E0228 09:28:00.145405 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerName="extract-content" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.145415 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerName="extract-content" Feb 28 09:28:00 crc kubenswrapper[4996]: E0228 09:28:00.145437 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerName="extract-utilities" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.145449 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerName="extract-utilities" Feb 28 09:28:00 crc kubenswrapper[4996]: E0228 09:28:00.145466 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerName="registry-server" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.145477 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerName="registry-server" Feb 28 09:28:00 
crc kubenswrapper[4996]: E0228 09:28:00.145496 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerName="registry-server" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.145506 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerName="registry-server" Feb 28 09:28:00 crc kubenswrapper[4996]: E0228 09:28:00.145534 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerName="extract-utilities" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.145545 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerName="extract-utilities" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.145860 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8b39c3-1ee7-4734-ac29-5f6b7d4a0eeb" containerName="registry-server" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.145896 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5d6eeb9-c34e-48ab-b339-3f4350ebf9bb" containerName="registry-server" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.146976 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537848-sdf7l" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.149459 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.152133 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.152382 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.153653 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537848-sdf7l"] Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.199970 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfwpc\" (UniqueName: \"kubernetes.io/projected/42239bf7-0b90-45ed-9624-4e1e4016d118-kube-api-access-xfwpc\") pod \"auto-csr-approver-29537848-sdf7l\" (UID: \"42239bf7-0b90-45ed-9624-4e1e4016d118\") " pod="openshift-infra/auto-csr-approver-29537848-sdf7l" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.302910 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfwpc\" (UniqueName: \"kubernetes.io/projected/42239bf7-0b90-45ed-9624-4e1e4016d118-kube-api-access-xfwpc\") pod \"auto-csr-approver-29537848-sdf7l\" (UID: \"42239bf7-0b90-45ed-9624-4e1e4016d118\") " pod="openshift-infra/auto-csr-approver-29537848-sdf7l" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.322237 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfwpc\" (UniqueName: \"kubernetes.io/projected/42239bf7-0b90-45ed-9624-4e1e4016d118-kube-api-access-xfwpc\") pod \"auto-csr-approver-29537848-sdf7l\" (UID: \"42239bf7-0b90-45ed-9624-4e1e4016d118\") " 
pod="openshift-infra/auto-csr-approver-29537848-sdf7l" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.466382 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537848-sdf7l" Feb 28 09:28:00 crc kubenswrapper[4996]: I0228 09:28:00.926114 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537848-sdf7l"] Feb 28 09:28:01 crc kubenswrapper[4996]: I0228 09:28:01.912772 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537848-sdf7l" event={"ID":"42239bf7-0b90-45ed-9624-4e1e4016d118","Type":"ContainerStarted","Data":"b9352ae3e2bdd2288daa69b933bef13fadaef02c85ce25954e35e1cb87aa7d76"} Feb 28 09:28:02 crc kubenswrapper[4996]: I0228 09:28:02.927878 4996 generic.go:334] "Generic (PLEG): container finished" podID="42239bf7-0b90-45ed-9624-4e1e4016d118" containerID="c0c32b8edb9eacff2735b0b87ed9522d39ce55675569ff5a1888306db7c967f8" exitCode=0 Feb 28 09:28:02 crc kubenswrapper[4996]: I0228 09:28:02.928068 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537848-sdf7l" event={"ID":"42239bf7-0b90-45ed-9624-4e1e4016d118","Type":"ContainerDied","Data":"c0c32b8edb9eacff2735b0b87ed9522d39ce55675569ff5a1888306db7c967f8"} Feb 28 09:28:04 crc kubenswrapper[4996]: I0228 09:28:04.258956 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537848-sdf7l" Feb 28 09:28:04 crc kubenswrapper[4996]: I0228 09:28:04.287115 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfwpc\" (UniqueName: \"kubernetes.io/projected/42239bf7-0b90-45ed-9624-4e1e4016d118-kube-api-access-xfwpc\") pod \"42239bf7-0b90-45ed-9624-4e1e4016d118\" (UID: \"42239bf7-0b90-45ed-9624-4e1e4016d118\") " Feb 28 09:28:04 crc kubenswrapper[4996]: I0228 09:28:04.293425 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42239bf7-0b90-45ed-9624-4e1e4016d118-kube-api-access-xfwpc" (OuterVolumeSpecName: "kube-api-access-xfwpc") pod "42239bf7-0b90-45ed-9624-4e1e4016d118" (UID: "42239bf7-0b90-45ed-9624-4e1e4016d118"). InnerVolumeSpecName "kube-api-access-xfwpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:28:04 crc kubenswrapper[4996]: I0228 09:28:04.389154 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfwpc\" (UniqueName: \"kubernetes.io/projected/42239bf7-0b90-45ed-9624-4e1e4016d118-kube-api-access-xfwpc\") on node \"crc\" DevicePath \"\"" Feb 28 09:28:04 crc kubenswrapper[4996]: I0228 09:28:04.971185 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537848-sdf7l" event={"ID":"42239bf7-0b90-45ed-9624-4e1e4016d118","Type":"ContainerDied","Data":"b9352ae3e2bdd2288daa69b933bef13fadaef02c85ce25954e35e1cb87aa7d76"} Feb 28 09:28:04 crc kubenswrapper[4996]: I0228 09:28:04.971665 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9352ae3e2bdd2288daa69b933bef13fadaef02c85ce25954e35e1cb87aa7d76" Feb 28 09:28:04 crc kubenswrapper[4996]: I0228 09:28:04.971295 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537848-sdf7l" Feb 28 09:28:05 crc kubenswrapper[4996]: I0228 09:28:05.335159 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537842-lvg4x"] Feb 28 09:28:05 crc kubenswrapper[4996]: I0228 09:28:05.344527 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537842-lvg4x"] Feb 28 09:28:06 crc kubenswrapper[4996]: I0228 09:28:06.034254 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:28:06 crc kubenswrapper[4996]: E0228 09:28:06.035141 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:28:07 crc kubenswrapper[4996]: I0228 09:28:07.047820 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b3aa079-adbe-4f89-a7cb-7cece7b04a9d" path="/var/lib/kubelet/pods/9b3aa079-adbe-4f89-a7cb-7cece7b04a9d/volumes" Feb 28 09:28:19 crc kubenswrapper[4996]: I0228 09:28:19.033215 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:28:19 crc kubenswrapper[4996]: E0228 09:28:19.033922 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:28:20 crc kubenswrapper[4996]: I0228 09:28:20.118149 4996 generic.go:334] "Generic (PLEG): container finished" podID="17f535e2-b6a2-40de-bf63-160b7aeb3b70" containerID="05619ef064203ce8ae9d29be8ef07bc7f52c07f17927f3e11b01f490217c1834" exitCode=0 Feb 28 09:28:20 crc kubenswrapper[4996]: I0228 09:28:20.118234 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" event={"ID":"17f535e2-b6a2-40de-bf63-160b7aeb3b70","Type":"ContainerDied","Data":"05619ef064203ce8ae9d29be8ef07bc7f52c07f17927f3e11b01f490217c1834"} Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.567867 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.628991 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-bootstrap-combined-ca-bundle\") pod \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.629129 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-inventory\") pod \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.629173 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-ssh-key-openstack-edpm-ipam\") pod \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 
09:28:21.629203 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mptvg\" (UniqueName: \"kubernetes.io/projected/17f535e2-b6a2-40de-bf63-160b7aeb3b70-kube-api-access-mptvg\") pod \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\" (UID: \"17f535e2-b6a2-40de-bf63-160b7aeb3b70\") " Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.636167 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "17f535e2-b6a2-40de-bf63-160b7aeb3b70" (UID: "17f535e2-b6a2-40de-bf63-160b7aeb3b70"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.636335 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17f535e2-b6a2-40de-bf63-160b7aeb3b70-kube-api-access-mptvg" (OuterVolumeSpecName: "kube-api-access-mptvg") pod "17f535e2-b6a2-40de-bf63-160b7aeb3b70" (UID: "17f535e2-b6a2-40de-bf63-160b7aeb3b70"). InnerVolumeSpecName "kube-api-access-mptvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.655776 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "17f535e2-b6a2-40de-bf63-160b7aeb3b70" (UID: "17f535e2-b6a2-40de-bf63-160b7aeb3b70"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.661763 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-inventory" (OuterVolumeSpecName: "inventory") pod "17f535e2-b6a2-40de-bf63-160b7aeb3b70" (UID: "17f535e2-b6a2-40de-bf63-160b7aeb3b70"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.730234 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.730273 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.730287 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mptvg\" (UniqueName: \"kubernetes.io/projected/17f535e2-b6a2-40de-bf63-160b7aeb3b70-kube-api-access-mptvg\") on node \"crc\" DevicePath \"\"" Feb 28 09:28:21 crc kubenswrapper[4996]: I0228 09:28:21.730299 4996 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f535e2-b6a2-40de-bf63-160b7aeb3b70-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.141581 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" event={"ID":"17f535e2-b6a2-40de-bf63-160b7aeb3b70","Type":"ContainerDied","Data":"1aba80f2842b5bdf1d4c8341dad4a1596b7dea227fe3a32aad792d0fb25480d2"} Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.141633 4996 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aba80f2842b5bdf1d4c8341dad4a1596b7dea227fe3a32aad792d0fb25480d2" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.141638 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.264502 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96"] Feb 28 09:28:22 crc kubenswrapper[4996]: E0228 09:28:22.264819 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17f535e2-b6a2-40de-bf63-160b7aeb3b70" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.264834 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="17f535e2-b6a2-40de-bf63-160b7aeb3b70" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 09:28:22 crc kubenswrapper[4996]: E0228 09:28:22.264865 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42239bf7-0b90-45ed-9624-4e1e4016d118" containerName="oc" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.264874 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="42239bf7-0b90-45ed-9624-4e1e4016d118" containerName="oc" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.265034 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="42239bf7-0b90-45ed-9624-4e1e4016d118" containerName="oc" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.265053 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="17f535e2-b6a2-40de-bf63-160b7aeb3b70" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.265607 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.268437 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.268607 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.268769 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.269115 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.287125 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96"] Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.340809 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vb96\" (UniqueName: \"kubernetes.io/projected/f7e9c1df-d012-45b5-8315-0c8c14d680d2-kube-api-access-8vb96\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-znw96\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.341154 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-znw96\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: 
I0228 09:28:22.341208 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-znw96\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.442775 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vb96\" (UniqueName: \"kubernetes.io/projected/f7e9c1df-d012-45b5-8315-0c8c14d680d2-kube-api-access-8vb96\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-znw96\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.443279 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-znw96\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.444130 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-znw96\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.451275 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-znw96\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.451435 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-znw96\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.480249 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vb96\" (UniqueName: \"kubernetes.io/projected/f7e9c1df-d012-45b5-8315-0c8c14d680d2-kube-api-access-8vb96\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-znw96\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:22 crc kubenswrapper[4996]: I0228 09:28:22.587129 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:28:23 crc kubenswrapper[4996]: I0228 09:28:23.244094 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96"] Feb 28 09:28:24 crc kubenswrapper[4996]: I0228 09:28:24.168281 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" event={"ID":"f7e9c1df-d012-45b5-8315-0c8c14d680d2","Type":"ContainerStarted","Data":"d3fe8171d55bab7cffdc92f1b7f5fb879812ea41dd09703e106477c649f33c70"} Feb 28 09:28:24 crc kubenswrapper[4996]: I0228 09:28:24.168668 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" event={"ID":"f7e9c1df-d012-45b5-8315-0c8c14d680d2","Type":"ContainerStarted","Data":"611936709a44180f67ffce89aba52125dffe5b2fa26e5443e365190978d0c0c1"} Feb 28 09:28:24 crc kubenswrapper[4996]: I0228 09:28:24.187658 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" podStartSLOduration=1.807739766 podStartE2EDuration="2.187632714s" podCreationTimestamp="2026-02-28 09:28:22 +0000 UTC" firstStartedPulling="2026-02-28 09:28:23.248478827 +0000 UTC m=+1666.939281638" lastFinishedPulling="2026-02-28 09:28:23.628371745 +0000 UTC m=+1667.319174586" observedRunningTime="2026-02-28 09:28:24.184268123 +0000 UTC m=+1667.875070964" watchObservedRunningTime="2026-02-28 09:28:24.187632714 +0000 UTC m=+1667.878435535" Feb 28 09:28:33 crc kubenswrapper[4996]: I0228 09:28:33.033622 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:28:33 crc kubenswrapper[4996]: E0228 09:28:33.034543 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:28:41 crc kubenswrapper[4996]: I0228 09:28:41.523763 4996 scope.go:117] "RemoveContainer" containerID="b8e0af30f13c87d932e02a24fd301666746b44c74e03bad427b1c2bdf7b48bf9" Feb 28 09:28:44 crc kubenswrapper[4996]: I0228 09:28:44.033311 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:28:44 crc kubenswrapper[4996]: E0228 09:28:44.033965 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:28:56 crc kubenswrapper[4996]: I0228 09:28:56.034095 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:28:56 crc kubenswrapper[4996]: E0228 09:28:56.034807 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:29:10 crc kubenswrapper[4996]: I0228 09:29:10.033791 4996 scope.go:117] "RemoveContainer" 
containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:29:10 crc kubenswrapper[4996]: E0228 09:29:10.034924 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:29:25 crc kubenswrapper[4996]: I0228 09:29:25.033962 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:29:25 crc kubenswrapper[4996]: E0228 09:29:25.035146 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:29:32 crc kubenswrapper[4996]: I0228 09:29:32.874168 4996 generic.go:334] "Generic (PLEG): container finished" podID="f7e9c1df-d012-45b5-8315-0c8c14d680d2" containerID="d3fe8171d55bab7cffdc92f1b7f5fb879812ea41dd09703e106477c649f33c70" exitCode=0 Feb 28 09:29:32 crc kubenswrapper[4996]: I0228 09:29:32.874297 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" event={"ID":"f7e9c1df-d012-45b5-8315-0c8c14d680d2","Type":"ContainerDied","Data":"d3fe8171d55bab7cffdc92f1b7f5fb879812ea41dd09703e106477c649f33c70"} Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.312346 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.495662 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-inventory\") pod \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.495865 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vb96\" (UniqueName: \"kubernetes.io/projected/f7e9c1df-d012-45b5-8315-0c8c14d680d2-kube-api-access-8vb96\") pod \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.495918 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-ssh-key-openstack-edpm-ipam\") pod \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\" (UID: \"f7e9c1df-d012-45b5-8315-0c8c14d680d2\") " Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.502125 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e9c1df-d012-45b5-8315-0c8c14d680d2-kube-api-access-8vb96" (OuterVolumeSpecName: "kube-api-access-8vb96") pod "f7e9c1df-d012-45b5-8315-0c8c14d680d2" (UID: "f7e9c1df-d012-45b5-8315-0c8c14d680d2"). InnerVolumeSpecName "kube-api-access-8vb96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.530710 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7e9c1df-d012-45b5-8315-0c8c14d680d2" (UID: "f7e9c1df-d012-45b5-8315-0c8c14d680d2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.539162 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-inventory" (OuterVolumeSpecName: "inventory") pod "f7e9c1df-d012-45b5-8315-0c8c14d680d2" (UID: "f7e9c1df-d012-45b5-8315-0c8c14d680d2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.598188 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.598226 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7e9c1df-d012-45b5-8315-0c8c14d680d2-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.598236 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vb96\" (UniqueName: \"kubernetes.io/projected/f7e9c1df-d012-45b5-8315-0c8c14d680d2-kube-api-access-8vb96\") on node \"crc\" DevicePath \"\"" Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.898442 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" 
event={"ID":"f7e9c1df-d012-45b5-8315-0c8c14d680d2","Type":"ContainerDied","Data":"611936709a44180f67ffce89aba52125dffe5b2fa26e5443e365190978d0c0c1"} Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.898483 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="611936709a44180f67ffce89aba52125dffe5b2fa26e5443e365190978d0c0c1" Feb 28 09:29:34 crc kubenswrapper[4996]: I0228 09:29:34.898544 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.016887 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp"] Feb 28 09:29:35 crc kubenswrapper[4996]: E0228 09:29:35.017353 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e9c1df-d012-45b5-8315-0c8c14d680d2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.017372 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e9c1df-d012-45b5-8315-0c8c14d680d2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.017623 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e9c1df-d012-45b5-8315-0c8c14d680d2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.018404 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.022691 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.031600 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.032750 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.033617 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.077967 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp"] Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.207972 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9czp\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.208234 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9czp\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: 
I0228 09:29:35.208367 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x8x5\" (UniqueName: \"kubernetes.io/projected/738a9376-79b4-4611-b57c-baf13a1899fd-kube-api-access-8x8x5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9czp\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.310346 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x8x5\" (UniqueName: \"kubernetes.io/projected/738a9376-79b4-4611-b57c-baf13a1899fd-kube-api-access-8x8x5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9czp\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.310560 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9czp\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.310606 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9czp\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.316426 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9czp\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.316785 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9czp\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.330812 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x8x5\" (UniqueName: \"kubernetes.io/projected/738a9376-79b4-4611-b57c-baf13a1899fd-kube-api-access-8x8x5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-f9czp\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.341090 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:35 crc kubenswrapper[4996]: I0228 09:29:35.925292 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp"] Feb 28 09:29:36 crc kubenswrapper[4996]: I0228 09:29:36.918124 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" event={"ID":"738a9376-79b4-4611-b57c-baf13a1899fd","Type":"ContainerStarted","Data":"158379b8007b9195d272c65da092c766ed4296f7fc08dae685731b1c77d4ea5c"} Feb 28 09:29:36 crc kubenswrapper[4996]: I0228 09:29:36.918188 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" event={"ID":"738a9376-79b4-4611-b57c-baf13a1899fd","Type":"ContainerStarted","Data":"700c6d3e6f5936888791710fb4bff835f84417db66b3d829afefacb48deef138"} Feb 28 09:29:36 crc kubenswrapper[4996]: I0228 09:29:36.948792 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" podStartSLOduration=2.536259841 podStartE2EDuration="2.948769229s" podCreationTimestamp="2026-02-28 09:29:34 +0000 UTC" firstStartedPulling="2026-02-28 09:29:35.929131141 +0000 UTC m=+1739.619933952" lastFinishedPulling="2026-02-28 09:29:36.341640529 +0000 UTC m=+1740.032443340" observedRunningTime="2026-02-28 09:29:36.942946288 +0000 UTC m=+1740.633749109" watchObservedRunningTime="2026-02-28 09:29:36.948769229 +0000 UTC m=+1740.639572050" Feb 28 09:29:37 crc kubenswrapper[4996]: I0228 09:29:37.050479 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:29:37 crc kubenswrapper[4996]: E0228 09:29:37.051179 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:29:41 crc kubenswrapper[4996]: I0228 09:29:41.650346 4996 scope.go:117] "RemoveContainer" containerID="861bf2f5723fcc3cabceb1af29dfdf2c89b37e0b6070d6b3e8ab87702cd7f064" Feb 28 09:29:41 crc kubenswrapper[4996]: I0228 09:29:41.684182 4996 scope.go:117] "RemoveContainer" containerID="87df899efa90f6097084935a1a353980cfcee8de85bb647c1e68f59e5224af31" Feb 28 09:29:41 crc kubenswrapper[4996]: I0228 09:29:41.972880 4996 generic.go:334] "Generic (PLEG): container finished" podID="738a9376-79b4-4611-b57c-baf13a1899fd" containerID="158379b8007b9195d272c65da092c766ed4296f7fc08dae685731b1c77d4ea5c" exitCode=0 Feb 28 09:29:41 crc kubenswrapper[4996]: I0228 09:29:41.972957 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" event={"ID":"738a9376-79b4-4611-b57c-baf13a1899fd","Type":"ContainerDied","Data":"158379b8007b9195d272c65da092c766ed4296f7fc08dae685731b1c77d4ea5c"} Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.422459 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.591041 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-ssh-key-openstack-edpm-ipam\") pod \"738a9376-79b4-4611-b57c-baf13a1899fd\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.591251 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-inventory\") pod \"738a9376-79b4-4611-b57c-baf13a1899fd\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.591433 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x8x5\" (UniqueName: \"kubernetes.io/projected/738a9376-79b4-4611-b57c-baf13a1899fd-kube-api-access-8x8x5\") pod \"738a9376-79b4-4611-b57c-baf13a1899fd\" (UID: \"738a9376-79b4-4611-b57c-baf13a1899fd\") " Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.599249 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738a9376-79b4-4611-b57c-baf13a1899fd-kube-api-access-8x8x5" (OuterVolumeSpecName: "kube-api-access-8x8x5") pod "738a9376-79b4-4611-b57c-baf13a1899fd" (UID: "738a9376-79b4-4611-b57c-baf13a1899fd"). InnerVolumeSpecName "kube-api-access-8x8x5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.639506 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "738a9376-79b4-4611-b57c-baf13a1899fd" (UID: "738a9376-79b4-4611-b57c-baf13a1899fd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.640200 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-inventory" (OuterVolumeSpecName: "inventory") pod "738a9376-79b4-4611-b57c-baf13a1899fd" (UID: "738a9376-79b4-4611-b57c-baf13a1899fd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.694489 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.694544 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/738a9376-79b4-4611-b57c-baf13a1899fd-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:29:43 crc kubenswrapper[4996]: I0228 09:29:43.694564 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x8x5\" (UniqueName: \"kubernetes.io/projected/738a9376-79b4-4611-b57c-baf13a1899fd-kube-api-access-8x8x5\") on node \"crc\" DevicePath \"\"" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.003674 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" 
event={"ID":"738a9376-79b4-4611-b57c-baf13a1899fd","Type":"ContainerDied","Data":"700c6d3e6f5936888791710fb4bff835f84417db66b3d829afefacb48deef138"} Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.003975 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="700c6d3e6f5936888791710fb4bff835f84417db66b3d829afefacb48deef138" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.003848 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.091449 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w"] Feb 28 09:29:44 crc kubenswrapper[4996]: E0228 09:29:44.091910 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738a9376-79b4-4611-b57c-baf13a1899fd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.091933 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="738a9376-79b4-4611-b57c-baf13a1899fd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.092150 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="738a9376-79b4-4611-b57c-baf13a1899fd" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.092857 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.094439 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.095176 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.095377 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.095451 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.111478 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w"] Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.203678 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2kc8w\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.203748 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2kc8w\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.204412 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2w9\" (UniqueName: \"kubernetes.io/projected/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-kube-api-access-wt2w9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2kc8w\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.305841 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt2w9\" (UniqueName: \"kubernetes.io/projected/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-kube-api-access-wt2w9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2kc8w\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.305996 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2kc8w\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.306801 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2kc8w\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.311117 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-2kc8w\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.311840 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2kc8w\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.323702 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt2w9\" (UniqueName: \"kubernetes.io/projected/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-kube-api-access-wt2w9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-2kc8w\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.409123 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:29:44 crc kubenswrapper[4996]: I0228 09:29:44.969855 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w"] Feb 28 09:29:44 crc kubenswrapper[4996]: W0228 09:29:44.971812 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a28e9ec_760a_4b1a_93ec_bdca318ebe00.slice/crio-abd4d3aa4d1dfaa38212a2926f6c569e5b527beeac5ed2da85c7b5b375eb6218 WatchSource:0}: Error finding container abd4d3aa4d1dfaa38212a2926f6c569e5b527beeac5ed2da85c7b5b375eb6218: Status 404 returned error can't find the container with id abd4d3aa4d1dfaa38212a2926f6c569e5b527beeac5ed2da85c7b5b375eb6218 Feb 28 09:29:45 crc kubenswrapper[4996]: I0228 09:29:45.011329 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" event={"ID":"6a28e9ec-760a-4b1a-93ec-bdca318ebe00","Type":"ContainerStarted","Data":"abd4d3aa4d1dfaa38212a2926f6c569e5b527beeac5ed2da85c7b5b375eb6218"} Feb 28 09:29:46 crc kubenswrapper[4996]: I0228 09:29:46.023964 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" event={"ID":"6a28e9ec-760a-4b1a-93ec-bdca318ebe00","Type":"ContainerStarted","Data":"a19ee68511e5256776041fab6f42469d148fea0460b2048dc9920b795a9a45d5"} Feb 28 09:29:46 crc kubenswrapper[4996]: I0228 09:29:46.046459 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" podStartSLOduration=1.515422472 podStartE2EDuration="2.046429054s" podCreationTimestamp="2026-02-28 09:29:44 +0000 UTC" firstStartedPulling="2026-02-28 09:29:44.974818488 +0000 UTC m=+1748.665621289" lastFinishedPulling="2026-02-28 09:29:45.50582506 +0000 UTC m=+1749.196627871" 
observedRunningTime="2026-02-28 09:29:46.044809814 +0000 UTC m=+1749.735612665" watchObservedRunningTime="2026-02-28 09:29:46.046429054 +0000 UTC m=+1749.737231895" Feb 28 09:29:47 crc kubenswrapper[4996]: I0228 09:29:47.097288 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4d7d-account-create-update-rdmzh"] Feb 28 09:29:47 crc kubenswrapper[4996]: I0228 09:29:47.103242 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-4d7d-account-create-update-rdmzh"] Feb 28 09:29:47 crc kubenswrapper[4996]: I0228 09:29:47.112190 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gtn9c"] Feb 28 09:29:47 crc kubenswrapper[4996]: I0228 09:29:47.119532 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gtn9c"] Feb 28 09:29:48 crc kubenswrapper[4996]: I0228 09:29:48.033833 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:29:48 crc kubenswrapper[4996]: E0228 09:29:48.034476 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:29:48 crc kubenswrapper[4996]: I0228 09:29:48.040247 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-df7d-account-create-update-ttm7k"] Feb 28 09:29:48 crc kubenswrapper[4996]: I0228 09:29:48.055950 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-df7d-account-create-update-ttm7k"] Feb 28 09:29:49 crc kubenswrapper[4996]: I0228 09:29:49.081571 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c5a66f08-b7bc-4b5e-8e08-25d602c30e34" path="/var/lib/kubelet/pods/c5a66f08-b7bc-4b5e-8e08-25d602c30e34/volumes" Feb 28 09:29:49 crc kubenswrapper[4996]: I0228 09:29:49.082523 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de95567a-4315-4687-9b8b-1a94bba6b4c4" path="/var/lib/kubelet/pods/de95567a-4315-4687-9b8b-1a94bba6b4c4/volumes" Feb 28 09:29:49 crc kubenswrapper[4996]: I0228 09:29:49.083130 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edeb8336-9b74-47dd-acb8-22384803c2c6" path="/var/lib/kubelet/pods/edeb8336-9b74-47dd-acb8-22384803c2c6/volumes" Feb 28 09:29:49 crc kubenswrapper[4996]: I0228 09:29:49.083666 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-lb9rn"] Feb 28 09:29:49 crc kubenswrapper[4996]: I0228 09:29:49.083699 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-6fzk4"] Feb 28 09:29:49 crc kubenswrapper[4996]: I0228 09:29:49.090696 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-56c8-account-create-update-wkft2"] Feb 28 09:29:49 crc kubenswrapper[4996]: I0228 09:29:49.097437 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-6fzk4"] Feb 28 09:29:49 crc kubenswrapper[4996]: I0228 09:29:49.104507 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-lb9rn"] Feb 28 09:29:49 crc kubenswrapper[4996]: I0228 09:29:49.112023 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-56c8-account-create-update-wkft2"] Feb 28 09:29:51 crc kubenswrapper[4996]: I0228 09:29:51.052465 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fcb493-2d66-4e54-a21a-bb6f84f68479" path="/var/lib/kubelet/pods/37fcb493-2d66-4e54-a21a-bb6f84f68479/volumes" Feb 28 09:29:51 crc kubenswrapper[4996]: I0228 09:29:51.053905 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3a9977b0-368a-4fd5-997f-760640256681" path="/var/lib/kubelet/pods/3a9977b0-368a-4fd5-997f-760640256681/volumes" Feb 28 09:29:51 crc kubenswrapper[4996]: I0228 09:29:51.055291 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5fce9db-bc0e-4778-b0be-d08e1b8febcc" path="/var/lib/kubelet/pods/e5fce9db-bc0e-4778-b0be-d08e1b8febcc/volumes" Feb 28 09:29:54 crc kubenswrapper[4996]: I0228 09:29:54.059593 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cmc8x"] Feb 28 09:29:54 crc kubenswrapper[4996]: I0228 09:29:54.075039 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cmc8x"] Feb 28 09:29:55 crc kubenswrapper[4996]: I0228 09:29:55.045529 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40da46e-f40e-412f-a3ec-0218e11cd495" path="/var/lib/kubelet/pods/e40da46e-f40e-412f-a3ec-0218e11cd495/volumes" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.138963 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537850-rllwb"] Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.140724 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537850-rllwb" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.142980 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.143431 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.144297 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.152800 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl"] Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.155217 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.157871 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.158191 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.163603 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537850-rllwb"] Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.177692 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl"] Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.254180 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-secret-volume\") pod \"collect-profiles-29537850-s8bdl\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.254364 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ts7w\" (UniqueName: \"kubernetes.io/projected/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-kube-api-access-4ts7w\") pod \"collect-profiles-29537850-s8bdl\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.254405 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8k4z\" (UniqueName: \"kubernetes.io/projected/5026f9bd-e6a2-4b42-b7cb-eefed7e5a187-kube-api-access-x8k4z\") pod \"auto-csr-approver-29537850-rllwb\" (UID: \"5026f9bd-e6a2-4b42-b7cb-eefed7e5a187\") " pod="openshift-infra/auto-csr-approver-29537850-rllwb" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.254426 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-config-volume\") pod \"collect-profiles-29537850-s8bdl\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.355464 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8k4z\" (UniqueName: \"kubernetes.io/projected/5026f9bd-e6a2-4b42-b7cb-eefed7e5a187-kube-api-access-x8k4z\") pod \"auto-csr-approver-29537850-rllwb\" (UID: \"5026f9bd-e6a2-4b42-b7cb-eefed7e5a187\") " pod="openshift-infra/auto-csr-approver-29537850-rllwb" Feb 28 
09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.355507 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-config-volume\") pod \"collect-profiles-29537850-s8bdl\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.355587 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-secret-volume\") pod \"collect-profiles-29537850-s8bdl\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.355663 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ts7w\" (UniqueName: \"kubernetes.io/projected/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-kube-api-access-4ts7w\") pod \"collect-profiles-29537850-s8bdl\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.357124 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-config-volume\") pod \"collect-profiles-29537850-s8bdl\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.365751 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-secret-volume\") pod \"collect-profiles-29537850-s8bdl\" (UID: 
\"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.373770 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8k4z\" (UniqueName: \"kubernetes.io/projected/5026f9bd-e6a2-4b42-b7cb-eefed7e5a187-kube-api-access-x8k4z\") pod \"auto-csr-approver-29537850-rllwb\" (UID: \"5026f9bd-e6a2-4b42-b7cb-eefed7e5a187\") " pod="openshift-infra/auto-csr-approver-29537850-rllwb" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.378020 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ts7w\" (UniqueName: \"kubernetes.io/projected/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-kube-api-access-4ts7w\") pod \"collect-profiles-29537850-s8bdl\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.502080 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537850-rllwb" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.509755 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.986170 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537850-rllwb"] Feb 28 09:30:00 crc kubenswrapper[4996]: I0228 09:30:00.996383 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl"] Feb 28 09:30:01 crc kubenswrapper[4996]: I0228 09:30:01.034152 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:30:01 crc kubenswrapper[4996]: E0228 09:30:01.034420 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:30:01 crc kubenswrapper[4996]: I0228 09:30:01.209587 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537850-rllwb" event={"ID":"5026f9bd-e6a2-4b42-b7cb-eefed7e5a187","Type":"ContainerStarted","Data":"e6be6714c5b2ec821e0493e0f442b9d9361af12dcaaf19f9a5a8906202b6a41a"} Feb 28 09:30:01 crc kubenswrapper[4996]: I0228 09:30:01.214150 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" event={"ID":"5b6dc4fe-51a8-4244-bbbb-18a4d1184814","Type":"ContainerStarted","Data":"7fe2cf36dde421516b03530e82c970cbc08b186e50c637eba10486929f3d59e8"} Feb 28 09:30:02 crc kubenswrapper[4996]: I0228 09:30:02.227929 4996 generic.go:334] "Generic (PLEG): container finished" podID="5b6dc4fe-51a8-4244-bbbb-18a4d1184814" 
containerID="07e9ee6800423ae08488948360e9dc26b880e3482f4a81c5d0dcedcdb17c9e6e" exitCode=0 Feb 28 09:30:02 crc kubenswrapper[4996]: I0228 09:30:02.227975 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" event={"ID":"5b6dc4fe-51a8-4244-bbbb-18a4d1184814","Type":"ContainerDied","Data":"07e9ee6800423ae08488948360e9dc26b880e3482f4a81c5d0dcedcdb17c9e6e"} Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.239120 4996 generic.go:334] "Generic (PLEG): container finished" podID="5026f9bd-e6a2-4b42-b7cb-eefed7e5a187" containerID="014891e052fad6da40ac1b8051d1ca1459aedeff2b06bccc33d2f037e895be5d" exitCode=0 Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.239283 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537850-rllwb" event={"ID":"5026f9bd-e6a2-4b42-b7cb-eefed7e5a187","Type":"ContainerDied","Data":"014891e052fad6da40ac1b8051d1ca1459aedeff2b06bccc33d2f037e895be5d"} Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.625447 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.723558 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ts7w\" (UniqueName: \"kubernetes.io/projected/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-kube-api-access-4ts7w\") pod \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.723626 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-config-volume\") pod \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.723692 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-secret-volume\") pod \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\" (UID: \"5b6dc4fe-51a8-4244-bbbb-18a4d1184814\") " Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.724646 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b6dc4fe-51a8-4244-bbbb-18a4d1184814" (UID: "5b6dc4fe-51a8-4244-bbbb-18a4d1184814"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.729195 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-kube-api-access-4ts7w" (OuterVolumeSpecName: "kube-api-access-4ts7w") pod "5b6dc4fe-51a8-4244-bbbb-18a4d1184814" (UID: "5b6dc4fe-51a8-4244-bbbb-18a4d1184814"). 
InnerVolumeSpecName "kube-api-access-4ts7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.729393 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b6dc4fe-51a8-4244-bbbb-18a4d1184814" (UID: "5b6dc4fe-51a8-4244-bbbb-18a4d1184814"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.826067 4996 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.826108 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ts7w\" (UniqueName: \"kubernetes.io/projected/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-kube-api-access-4ts7w\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:03 crc kubenswrapper[4996]: I0228 09:30:03.826117 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b6dc4fe-51a8-4244-bbbb-18a4d1184814-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:04 crc kubenswrapper[4996]: I0228 09:30:04.251210 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" Feb 28 09:30:04 crc kubenswrapper[4996]: I0228 09:30:04.251209 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl" event={"ID":"5b6dc4fe-51a8-4244-bbbb-18a4d1184814","Type":"ContainerDied","Data":"7fe2cf36dde421516b03530e82c970cbc08b186e50c637eba10486929f3d59e8"} Feb 28 09:30:04 crc kubenswrapper[4996]: I0228 09:30:04.251691 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fe2cf36dde421516b03530e82c970cbc08b186e50c637eba10486929f3d59e8" Feb 28 09:30:04 crc kubenswrapper[4996]: I0228 09:30:04.538154 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537850-rllwb" Feb 28 09:30:04 crc kubenswrapper[4996]: I0228 09:30:04.639256 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8k4z\" (UniqueName: \"kubernetes.io/projected/5026f9bd-e6a2-4b42-b7cb-eefed7e5a187-kube-api-access-x8k4z\") pod \"5026f9bd-e6a2-4b42-b7cb-eefed7e5a187\" (UID: \"5026f9bd-e6a2-4b42-b7cb-eefed7e5a187\") " Feb 28 09:30:04 crc kubenswrapper[4996]: I0228 09:30:04.645286 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5026f9bd-e6a2-4b42-b7cb-eefed7e5a187-kube-api-access-x8k4z" (OuterVolumeSpecName: "kube-api-access-x8k4z") pod "5026f9bd-e6a2-4b42-b7cb-eefed7e5a187" (UID: "5026f9bd-e6a2-4b42-b7cb-eefed7e5a187"). InnerVolumeSpecName "kube-api-access-x8k4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:30:04 crc kubenswrapper[4996]: I0228 09:30:04.741878 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8k4z\" (UniqueName: \"kubernetes.io/projected/5026f9bd-e6a2-4b42-b7cb-eefed7e5a187-kube-api-access-x8k4z\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:05 crc kubenswrapper[4996]: I0228 09:30:05.264906 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537850-rllwb" event={"ID":"5026f9bd-e6a2-4b42-b7cb-eefed7e5a187","Type":"ContainerDied","Data":"e6be6714c5b2ec821e0493e0f442b9d9361af12dcaaf19f9a5a8906202b6a41a"} Feb 28 09:30:05 crc kubenswrapper[4996]: I0228 09:30:05.264965 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537850-rllwb" Feb 28 09:30:05 crc kubenswrapper[4996]: I0228 09:30:05.264971 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6be6714c5b2ec821e0493e0f442b9d9361af12dcaaf19f9a5a8906202b6a41a" Feb 28 09:30:05 crc kubenswrapper[4996]: I0228 09:30:05.613133 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537844-nc5cg"] Feb 28 09:30:05 crc kubenswrapper[4996]: I0228 09:30:05.628498 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537844-nc5cg"] Feb 28 09:30:07 crc kubenswrapper[4996]: I0228 09:30:07.070731 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d715edd-156b-434f-8c63-0f6ef5314659" path="/var/lib/kubelet/pods/6d715edd-156b-434f-8c63-0f6ef5314659/volumes" Feb 28 09:30:13 crc kubenswrapper[4996]: I0228 09:30:13.031661 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dw7wp"] Feb 28 09:30:13 crc kubenswrapper[4996]: I0228 09:30:13.047174 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dw7wp"] Feb 28 
09:30:15 crc kubenswrapper[4996]: I0228 09:30:15.033205 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:30:15 crc kubenswrapper[4996]: E0228 09:30:15.034090 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:30:15 crc kubenswrapper[4996]: I0228 09:30:15.052486 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca969dbf-f76c-4c52-b619-0c85dd8a7f61" path="/var/lib/kubelet/pods/ca969dbf-f76c-4c52-b619-0c85dd8a7f61/volumes" Feb 28 09:30:23 crc kubenswrapper[4996]: I0228 09:30:23.447078 4996 generic.go:334] "Generic (PLEG): container finished" podID="6a28e9ec-760a-4b1a-93ec-bdca318ebe00" containerID="a19ee68511e5256776041fab6f42469d148fea0460b2048dc9920b795a9a45d5" exitCode=0 Feb 28 09:30:23 crc kubenswrapper[4996]: I0228 09:30:23.447129 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" event={"ID":"6a28e9ec-760a-4b1a-93ec-bdca318ebe00","Type":"ContainerDied","Data":"a19ee68511e5256776041fab6f42469d148fea0460b2048dc9920b795a9a45d5"} Feb 28 09:30:24 crc kubenswrapper[4996]: I0228 09:30:24.887378 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:30:24 crc kubenswrapper[4996]: I0228 09:30:24.996855 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt2w9\" (UniqueName: \"kubernetes.io/projected/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-kube-api-access-wt2w9\") pod \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " Feb 28 09:30:24 crc kubenswrapper[4996]: I0228 09:30:24.996939 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-inventory\") pod \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:24.997220 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-ssh-key-openstack-edpm-ipam\") pod \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\" (UID: \"6a28e9ec-760a-4b1a-93ec-bdca318ebe00\") " Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.016299 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-kube-api-access-wt2w9" (OuterVolumeSpecName: "kube-api-access-wt2w9") pod "6a28e9ec-760a-4b1a-93ec-bdca318ebe00" (UID: "6a28e9ec-760a-4b1a-93ec-bdca318ebe00"). InnerVolumeSpecName "kube-api-access-wt2w9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.044180 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6a28e9ec-760a-4b1a-93ec-bdca318ebe00" (UID: "6a28e9ec-760a-4b1a-93ec-bdca318ebe00"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.095223 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-inventory" (OuterVolumeSpecName: "inventory") pod "6a28e9ec-760a-4b1a-93ec-bdca318ebe00" (UID: "6a28e9ec-760a-4b1a-93ec-bdca318ebe00"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.101175 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.101214 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.101225 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt2w9\" (UniqueName: \"kubernetes.io/projected/6a28e9ec-760a-4b1a-93ec-bdca318ebe00-kube-api-access-wt2w9\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.475240 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" 
event={"ID":"6a28e9ec-760a-4b1a-93ec-bdca318ebe00","Type":"ContainerDied","Data":"abd4d3aa4d1dfaa38212a2926f6c569e5b527beeac5ed2da85c7b5b375eb6218"} Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.475316 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abd4d3aa4d1dfaa38212a2926f6c569e5b527beeac5ed2da85c7b5b375eb6218" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.475424 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.570907 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6"] Feb 28 09:30:25 crc kubenswrapper[4996]: E0228 09:30:25.571311 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a28e9ec-760a-4b1a-93ec-bdca318ebe00" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.571333 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a28e9ec-760a-4b1a-93ec-bdca318ebe00" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:30:25 crc kubenswrapper[4996]: E0228 09:30:25.571358 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6dc4fe-51a8-4244-bbbb-18a4d1184814" containerName="collect-profiles" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.571365 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6dc4fe-51a8-4244-bbbb-18a4d1184814" containerName="collect-profiles" Feb 28 09:30:25 crc kubenswrapper[4996]: E0228 09:30:25.571386 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5026f9bd-e6a2-4b42-b7cb-eefed7e5a187" containerName="oc" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.571392 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5026f9bd-e6a2-4b42-b7cb-eefed7e5a187" containerName="oc" 
Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.571548 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5026f9bd-e6a2-4b42-b7cb-eefed7e5a187" containerName="oc" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.571568 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6dc4fe-51a8-4244-bbbb-18a4d1184814" containerName="collect-profiles" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.571591 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a28e9ec-760a-4b1a-93ec-bdca318ebe00" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.572270 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.575678 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.576092 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.576208 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.576397 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.601788 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6"] Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.713141 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.713496 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbqtg\" (UniqueName: \"kubernetes.io/projected/49f511eb-b1e0-4c7a-a26d-49fad3305cee-kube-api-access-xbqtg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.713585 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.815419 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.815485 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbqtg\" (UniqueName: \"kubernetes.io/projected/49f511eb-b1e0-4c7a-a26d-49fad3305cee-kube-api-access-xbqtg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6\" (UID: 
\"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.815587 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.821803 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.822105 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.852591 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbqtg\" (UniqueName: \"kubernetes.io/projected/49f511eb-b1e0-4c7a-a26d-49fad3305cee-kube-api-access-xbqtg\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:25 crc kubenswrapper[4996]: I0228 09:30:25.903818 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:26 crc kubenswrapper[4996]: I0228 09:30:26.063814 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5882-account-create-update-s4s2j"] Feb 28 09:30:26 crc kubenswrapper[4996]: I0228 09:30:26.079883 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-q6mpl"] Feb 28 09:30:26 crc kubenswrapper[4996]: I0228 09:30:26.091346 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-q6mpl"] Feb 28 09:30:26 crc kubenswrapper[4996]: I0228 09:30:26.099342 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5882-account-create-update-s4s2j"] Feb 28 09:30:26 crc kubenswrapper[4996]: I0228 09:30:26.509896 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6"] Feb 28 09:30:26 crc kubenswrapper[4996]: W0228 09:30:26.511172 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49f511eb_b1e0_4c7a_a26d_49fad3305cee.slice/crio-9a4b995ccbf739eec66ceaa6da4c2e7b1dc1900d2c1a24acfc72ee950ec0f55d WatchSource:0}: Error finding container 9a4b995ccbf739eec66ceaa6da4c2e7b1dc1900d2c1a24acfc72ee950ec0f55d: Status 404 returned error can't find the container with id 9a4b995ccbf739eec66ceaa6da4c2e7b1dc1900d2c1a24acfc72ee950ec0f55d Feb 28 09:30:27 crc kubenswrapper[4996]: I0228 09:30:27.049435 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651f6398-d609-4a58-9b97-4bff8aff24cd" path="/var/lib/kubelet/pods/651f6398-d609-4a58-9b97-4bff8aff24cd/volumes" Feb 28 09:30:27 crc kubenswrapper[4996]: I0228 09:30:27.051468 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7daa6505-5dd3-48bd-bba3-18c707ea38ed" path="/var/lib/kubelet/pods/7daa6505-5dd3-48bd-bba3-18c707ea38ed/volumes" Feb 28 
09:30:27 crc kubenswrapper[4996]: I0228 09:30:27.494518 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" event={"ID":"49f511eb-b1e0-4c7a-a26d-49fad3305cee","Type":"ContainerStarted","Data":"94b0046a3617c6f9be1f3afc5a5feb6ea7d694ca88eea2260a4c212d9d7f5224"} Feb 28 09:30:27 crc kubenswrapper[4996]: I0228 09:30:27.494822 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" event={"ID":"49f511eb-b1e0-4c7a-a26d-49fad3305cee","Type":"ContainerStarted","Data":"9a4b995ccbf739eec66ceaa6da4c2e7b1dc1900d2c1a24acfc72ee950ec0f55d"} Feb 28 09:30:27 crc kubenswrapper[4996]: I0228 09:30:27.521450 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" podStartSLOduration=2.1212515180000002 podStartE2EDuration="2.521428084s" podCreationTimestamp="2026-02-28 09:30:25 +0000 UTC" firstStartedPulling="2026-02-28 09:30:26.514841781 +0000 UTC m=+1790.205644592" lastFinishedPulling="2026-02-28 09:30:26.915018347 +0000 UTC m=+1790.605821158" observedRunningTime="2026-02-28 09:30:27.512979248 +0000 UTC m=+1791.203782129" watchObservedRunningTime="2026-02-28 09:30:27.521428084 +0000 UTC m=+1791.212230905" Feb 28 09:30:29 crc kubenswrapper[4996]: I0228 09:30:29.033854 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:30:29 crc kubenswrapper[4996]: E0228 09:30:29.034652 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:30:30 crc kubenswrapper[4996]: I0228 09:30:30.035711 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-247f-account-create-update-spp2j"] Feb 28 09:30:30 crc kubenswrapper[4996]: I0228 09:30:30.043551 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-aabe-account-create-update-lnvhh"] Feb 28 09:30:30 crc kubenswrapper[4996]: I0228 09:30:30.056326 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-247f-account-create-update-spp2j"] Feb 28 09:30:30 crc kubenswrapper[4996]: I0228 09:30:30.066138 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-bgwbh"] Feb 28 09:30:30 crc kubenswrapper[4996]: I0228 09:30:30.071886 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-aabe-account-create-update-lnvhh"] Feb 28 09:30:30 crc kubenswrapper[4996]: I0228 09:30:30.079713 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-m6m58"] Feb 28 09:30:30 crc kubenswrapper[4996]: I0228 09:30:30.090563 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-m6m58"] Feb 28 09:30:30 crc kubenswrapper[4996]: I0228 09:30:30.100139 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-bgwbh"] Feb 28 09:30:31 crc kubenswrapper[4996]: I0228 09:30:31.057168 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ef240be-40f4-4734-81bb-95b0b99a83b7" path="/var/lib/kubelet/pods/7ef240be-40f4-4734-81bb-95b0b99a83b7/volumes" Feb 28 09:30:31 crc kubenswrapper[4996]: I0228 09:30:31.057882 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c05a8ea6-88b3-4771-91db-da109123131d" path="/var/lib/kubelet/pods/c05a8ea6-88b3-4771-91db-da109123131d/volumes" Feb 28 09:30:31 crc kubenswrapper[4996]: I0228 09:30:31.058425 4996 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="d5696d56-384e-47a4-bff3-cc4a07264817" path="/var/lib/kubelet/pods/d5696d56-384e-47a4-bff3-cc4a07264817/volumes" Feb 28 09:30:31 crc kubenswrapper[4996]: I0228 09:30:31.058975 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe096774-7016-4037-93fb-e0154de207ba" path="/var/lib/kubelet/pods/fe096774-7016-4037-93fb-e0154de207ba/volumes" Feb 28 09:30:31 crc kubenswrapper[4996]: I0228 09:30:31.580174 4996 generic.go:334] "Generic (PLEG): container finished" podID="49f511eb-b1e0-4c7a-a26d-49fad3305cee" containerID="94b0046a3617c6f9be1f3afc5a5feb6ea7d694ca88eea2260a4c212d9d7f5224" exitCode=0 Feb 28 09:30:31 crc kubenswrapper[4996]: I0228 09:30:31.580236 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" event={"ID":"49f511eb-b1e0-4c7a-a26d-49fad3305cee","Type":"ContainerDied","Data":"94b0046a3617c6f9be1f3afc5a5feb6ea7d694ca88eea2260a4c212d9d7f5224"} Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.050150 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.154419 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbqtg\" (UniqueName: \"kubernetes.io/projected/49f511eb-b1e0-4c7a-a26d-49fad3305cee-kube-api-access-xbqtg\") pod \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.154533 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-ssh-key-openstack-edpm-ipam\") pod \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.154587 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-inventory\") pod \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\" (UID: \"49f511eb-b1e0-4c7a-a26d-49fad3305cee\") " Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.160606 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f511eb-b1e0-4c7a-a26d-49fad3305cee-kube-api-access-xbqtg" (OuterVolumeSpecName: "kube-api-access-xbqtg") pod "49f511eb-b1e0-4c7a-a26d-49fad3305cee" (UID: "49f511eb-b1e0-4c7a-a26d-49fad3305cee"). InnerVolumeSpecName "kube-api-access-xbqtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.181916 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49f511eb-b1e0-4c7a-a26d-49fad3305cee" (UID: "49f511eb-b1e0-4c7a-a26d-49fad3305cee"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.202377 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-inventory" (OuterVolumeSpecName: "inventory") pod "49f511eb-b1e0-4c7a-a26d-49fad3305cee" (UID: "49f511eb-b1e0-4c7a-a26d-49fad3305cee"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.256884 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbqtg\" (UniqueName: \"kubernetes.io/projected/49f511eb-b1e0-4c7a-a26d-49fad3305cee-kube-api-access-xbqtg\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.256925 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.256935 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f511eb-b1e0-4c7a-a26d-49fad3305cee-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.605169 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" 
event={"ID":"49f511eb-b1e0-4c7a-a26d-49fad3305cee","Type":"ContainerDied","Data":"9a4b995ccbf739eec66ceaa6da4c2e7b1dc1900d2c1a24acfc72ee950ec0f55d"} Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.605225 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4b995ccbf739eec66ceaa6da4c2e7b1dc1900d2c1a24acfc72ee950ec0f55d" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.605302 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.681155 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6"] Feb 28 09:30:33 crc kubenswrapper[4996]: E0228 09:30:33.681492 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f511eb-b1e0-4c7a-a26d-49fad3305cee" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.681510 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f511eb-b1e0-4c7a-a26d-49fad3305cee" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.681674 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f511eb-b1e0-4c7a-a26d-49fad3305cee" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.682221 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.685672 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.686100 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.686272 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.686594 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.691865 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6"] Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.765038 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.765115 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnvjk\" (UniqueName: \"kubernetes.io/projected/4894e3d4-6f41-4761-974a-7a150702e852-kube-api-access-cnvjk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.765411 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.866734 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.866822 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnvjk\" (UniqueName: \"kubernetes.io/projected/4894e3d4-6f41-4761-974a-7a150702e852-kube-api-access-cnvjk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.866925 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.871624 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.873209 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:33 crc kubenswrapper[4996]: I0228 09:30:33.892479 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnvjk\" (UniqueName: \"kubernetes.io/projected/4894e3d4-6f41-4761-974a-7a150702e852-kube-api-access-cnvjk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:34 crc kubenswrapper[4996]: I0228 09:30:33.999972 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:30:34 crc kubenswrapper[4996]: I0228 09:30:34.039236 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-wxpd2"] Feb 28 09:30:34 crc kubenswrapper[4996]: I0228 09:30:34.049612 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-wxpd2"] Feb 28 09:30:34 crc kubenswrapper[4996]: I0228 09:30:34.599323 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6"] Feb 28 09:30:34 crc kubenswrapper[4996]: W0228 09:30:34.607749 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4894e3d4_6f41_4761_974a_7a150702e852.slice/crio-43f585b56f3b6b16d0040a86a5eecc3a2f3942425e6ed8ed50c8802c8bd5c4ca WatchSource:0}: Error finding container 43f585b56f3b6b16d0040a86a5eecc3a2f3942425e6ed8ed50c8802c8bd5c4ca: Status 404 returned error can't find the container with id 43f585b56f3b6b16d0040a86a5eecc3a2f3942425e6ed8ed50c8802c8bd5c4ca Feb 28 09:30:35 crc kubenswrapper[4996]: I0228 09:30:35.045079 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2620062d-d286-4cae-b123-b53cf5c9f71a" path="/var/lib/kubelet/pods/2620062d-d286-4cae-b123-b53cf5c9f71a/volumes" Feb 28 09:30:35 crc kubenswrapper[4996]: I0228 09:30:35.626424 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" event={"ID":"4894e3d4-6f41-4761-974a-7a150702e852","Type":"ContainerStarted","Data":"6daaa3ca882eab2bcc77f937ed99605ee5a323616f5dd925c69757ad2b120341"} Feb 28 09:30:35 crc kubenswrapper[4996]: I0228 09:30:35.626470 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" 
event={"ID":"4894e3d4-6f41-4761-974a-7a150702e852","Type":"ContainerStarted","Data":"43f585b56f3b6b16d0040a86a5eecc3a2f3942425e6ed8ed50c8802c8bd5c4ca"} Feb 28 09:30:35 crc kubenswrapper[4996]: I0228 09:30:35.667655 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" podStartSLOduration=2.281579416 podStartE2EDuration="2.667635527s" podCreationTimestamp="2026-02-28 09:30:33 +0000 UTC" firstStartedPulling="2026-02-28 09:30:34.610538535 +0000 UTC m=+1798.301341356" lastFinishedPulling="2026-02-28 09:30:34.996594656 +0000 UTC m=+1798.687397467" observedRunningTime="2026-02-28 09:30:35.657538172 +0000 UTC m=+1799.348341023" watchObservedRunningTime="2026-02-28 09:30:35.667635527 +0000 UTC m=+1799.358438348" Feb 28 09:30:41 crc kubenswrapper[4996]: I0228 09:30:41.777992 4996 scope.go:117] "RemoveContainer" containerID="7bb00be0a06875039914d5a28d12b673a5d93c25289d55b6a3d0303ff3528922" Feb 28 09:30:41 crc kubenswrapper[4996]: I0228 09:30:41.827466 4996 scope.go:117] "RemoveContainer" containerID="a5743272415b1eb8d207e7122aa11baaaa5cadec6a5b1a4915f0f55b78a78996" Feb 28 09:30:41 crc kubenswrapper[4996]: I0228 09:30:41.897359 4996 scope.go:117] "RemoveContainer" containerID="b49a7caf8a70f9cdd3f8024b64460594befd85a7d744cd3f1e5a9b17f6e7bab0" Feb 28 09:30:41 crc kubenswrapper[4996]: I0228 09:30:41.972278 4996 scope.go:117] "RemoveContainer" containerID="4d477925c3d293ff1c7a2512303a1d3fce654d456ea197429f1538bdb124f868" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.016954 4996 scope.go:117] "RemoveContainer" containerID="418b09438ba9d1f12b752434c9962998e7394ee312b84bc617736280230932da" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.041756 4996 scope.go:117] "RemoveContainer" containerID="4d748a411cd8fc0be453f8d30296804bd3d2ad631c2bb72621290c6e107ec6c0" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.074356 4996 scope.go:117] "RemoveContainer" 
containerID="a3e90e4f95ce60d7af4a10c5dfa83e9d064eeb47a1c206fba595bf78b8946f32" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.100055 4996 scope.go:117] "RemoveContainer" containerID="ed87c70d05e7fadf9caffe1331dbdf94e2804fde193dd414a3ff8797955223ac" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.143913 4996 scope.go:117] "RemoveContainer" containerID="9240a290b363b794b32436d5b3be28e3a2db690532e7124f014b0fb37788f548" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.171472 4996 scope.go:117] "RemoveContainer" containerID="f3ae16b9c8cd14c9121bca722a2863f4cc5b4e608cf91a5452e3b8bd2d899ed9" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.192918 4996 scope.go:117] "RemoveContainer" containerID="7edda3a13769aeeeabe7e4a3f8bafec913c6500c465272b973321eea6d9358ff" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.214803 4996 scope.go:117] "RemoveContainer" containerID="5785386447ca3373e5481d45f92674f1917704bef61d2860d83804684e07b3db" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.231617 4996 scope.go:117] "RemoveContainer" containerID="7a37adb861d87998a7154f36713ec32333e314142466d100f8cfb1d451814313" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.253067 4996 scope.go:117] "RemoveContainer" containerID="fec0214ab90dd3d59c492ea9160c63057276368d55fcab40c560349b98028e8b" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.274322 4996 scope.go:117] "RemoveContainer" containerID="51a8bec5d86469aeacdb0fa3b83cbf3e46596863470b0d72d93ae684933eea54" Feb 28 09:30:42 crc kubenswrapper[4996]: I0228 09:30:42.303787 4996 scope.go:117] "RemoveContainer" containerID="4a321b218126663abcdde2341e1597440e4466f0ebb2710cae193694c7bd5513" Feb 28 09:30:43 crc kubenswrapper[4996]: I0228 09:30:43.034041 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:30:43 crc kubenswrapper[4996]: E0228 09:30:43.034954 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:30:54 crc kubenswrapper[4996]: I0228 09:30:54.035327 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:30:54 crc kubenswrapper[4996]: E0228 09:30:54.039164 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:31:05 crc kubenswrapper[4996]: I0228 09:31:05.063198 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-nmp42"] Feb 28 09:31:05 crc kubenswrapper[4996]: I0228 09:31:05.075924 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-nmp42"] Feb 28 09:31:07 crc kubenswrapper[4996]: I0228 09:31:07.056761 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3113a2ed-c04d-4f18-9f1d-a47482aa76fa" path="/var/lib/kubelet/pods/3113a2ed-c04d-4f18-9f1d-a47482aa76fa/volumes" Feb 28 09:31:07 crc kubenswrapper[4996]: I0228 09:31:07.058579 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-j2wgr"] Feb 28 09:31:07 crc kubenswrapper[4996]: I0228 09:31:07.058768 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-j2wgr"] Feb 28 09:31:08 crc kubenswrapper[4996]: I0228 09:31:08.033272 4996 scope.go:117] 
"RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:31:08 crc kubenswrapper[4996]: E0228 09:31:08.034093 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:31:09 crc kubenswrapper[4996]: I0228 09:31:09.045546 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05164ff9-4bc2-433a-881c-5046c3352637" path="/var/lib/kubelet/pods/05164ff9-4bc2-433a-881c-5046c3352637/volumes" Feb 28 09:31:13 crc kubenswrapper[4996]: I0228 09:31:13.069963 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-svn7g"] Feb 28 09:31:13 crc kubenswrapper[4996]: I0228 09:31:13.070893 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-svn7g"] Feb 28 09:31:15 crc kubenswrapper[4996]: I0228 09:31:15.043423 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec2d69b-af74-41fe-b5eb-2cd05e40ffde" path="/var/lib/kubelet/pods/aec2d69b-af74-41fe-b5eb-2cd05e40ffde/volumes" Feb 28 09:31:23 crc kubenswrapper[4996]: I0228 09:31:23.033335 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:31:23 crc kubenswrapper[4996]: E0228 09:31:23.034354 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:31:26 crc kubenswrapper[4996]: I0228 09:31:26.050187 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2fdzl"] Feb 28 09:31:26 crc kubenswrapper[4996]: I0228 09:31:26.060959 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2fdzl"] Feb 28 09:31:26 crc kubenswrapper[4996]: I0228 09:31:26.141378 4996 generic.go:334] "Generic (PLEG): container finished" podID="4894e3d4-6f41-4761-974a-7a150702e852" containerID="6daaa3ca882eab2bcc77f937ed99605ee5a323616f5dd925c69757ad2b120341" exitCode=0 Feb 28 09:31:26 crc kubenswrapper[4996]: I0228 09:31:26.141436 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" event={"ID":"4894e3d4-6f41-4761-974a-7a150702e852","Type":"ContainerDied","Data":"6daaa3ca882eab2bcc77f937ed99605ee5a323616f5dd925c69757ad2b120341"} Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.053128 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="531cd3d1-8618-42d1-88a1-b23b8ca9be62" path="/var/lib/kubelet/pods/531cd3d1-8618-42d1-88a1-b23b8ca9be62/volumes" Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.574733 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.762142 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-ssh-key-openstack-edpm-ipam\") pod \"4894e3d4-6f41-4761-974a-7a150702e852\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.763002 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-inventory\") pod \"4894e3d4-6f41-4761-974a-7a150702e852\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.763280 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnvjk\" (UniqueName: \"kubernetes.io/projected/4894e3d4-6f41-4761-974a-7a150702e852-kube-api-access-cnvjk\") pod \"4894e3d4-6f41-4761-974a-7a150702e852\" (UID: \"4894e3d4-6f41-4761-974a-7a150702e852\") " Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.768419 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4894e3d4-6f41-4761-974a-7a150702e852-kube-api-access-cnvjk" (OuterVolumeSpecName: "kube-api-access-cnvjk") pod "4894e3d4-6f41-4761-974a-7a150702e852" (UID: "4894e3d4-6f41-4761-974a-7a150702e852"). InnerVolumeSpecName "kube-api-access-cnvjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.791853 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-inventory" (OuterVolumeSpecName: "inventory") pod "4894e3d4-6f41-4761-974a-7a150702e852" (UID: "4894e3d4-6f41-4761-974a-7a150702e852"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.800105 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4894e3d4-6f41-4761-974a-7a150702e852" (UID: "4894e3d4-6f41-4761-974a-7a150702e852"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.867237 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.867264 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnvjk\" (UniqueName: \"kubernetes.io/projected/4894e3d4-6f41-4761-974a-7a150702e852-kube-api-access-cnvjk\") on node \"crc\" DevicePath \"\"" Feb 28 09:31:27 crc kubenswrapper[4996]: I0228 09:31:27.867274 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4894e3d4-6f41-4761-974a-7a150702e852-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.028918 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-kplwp"] Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.035476 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-kplwp"] Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.165194 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" 
event={"ID":"4894e3d4-6f41-4761-974a-7a150702e852","Type":"ContainerDied","Data":"43f585b56f3b6b16d0040a86a5eecc3a2f3942425e6ed8ed50c8802c8bd5c4ca"} Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.165247 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43f585b56f3b6b16d0040a86a5eecc3a2f3942425e6ed8ed50c8802c8bd5c4ca" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.165245 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.233427 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g2gd4"] Feb 28 09:31:28 crc kubenswrapper[4996]: E0228 09:31:28.233756 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4894e3d4-6f41-4761-974a-7a150702e852" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.233773 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4894e3d4-6f41-4761-974a-7a150702e852" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.233966 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="4894e3d4-6f41-4761-974a-7a150702e852" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.234508 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.236485 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.238276 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.238330 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.238345 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.251232 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g2gd4"] Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.376979 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g2gd4\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.377187 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g2gd4\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.377268 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ncdxm\" (UniqueName: \"kubernetes.io/projected/187619a6-6bc6-4fce-a88e-f13ccf565e4d-kube-api-access-ncdxm\") pod \"ssh-known-hosts-edpm-deployment-g2gd4\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.479276 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncdxm\" (UniqueName: \"kubernetes.io/projected/187619a6-6bc6-4fce-a88e-f13ccf565e4d-kube-api-access-ncdxm\") pod \"ssh-known-hosts-edpm-deployment-g2gd4\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.479429 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g2gd4\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.479633 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g2gd4\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.487133 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-g2gd4\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 
09:31:28.494895 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-g2gd4\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.508163 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncdxm\" (UniqueName: \"kubernetes.io/projected/187619a6-6bc6-4fce-a88e-f13ccf565e4d-kube-api-access-ncdxm\") pod \"ssh-known-hosts-edpm-deployment-g2gd4\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:28 crc kubenswrapper[4996]: I0228 09:31:28.571493 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:29 crc kubenswrapper[4996]: I0228 09:31:29.049220 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db0401da-7bc1-4203-bdfb-2a06deade35b" path="/var/lib/kubelet/pods/db0401da-7bc1-4203-bdfb-2a06deade35b/volumes" Feb 28 09:31:29 crc kubenswrapper[4996]: I0228 09:31:29.135939 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g2gd4"] Feb 28 09:31:29 crc kubenswrapper[4996]: I0228 09:31:29.142802 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:31:29 crc kubenswrapper[4996]: I0228 09:31:29.174549 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" event={"ID":"187619a6-6bc6-4fce-a88e-f13ccf565e4d","Type":"ContainerStarted","Data":"ed0120d1e3519cea963a0a0090855c468d83862760d440df5a1e0b9b6271cd99"} Feb 28 09:31:30 crc kubenswrapper[4996]: I0228 09:31:30.194272 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" event={"ID":"187619a6-6bc6-4fce-a88e-f13ccf565e4d","Type":"ContainerStarted","Data":"c58a800ec77d489e2a360ad16bd16a8e07c187ce0d161d13dd5122ace80012c5"} Feb 28 09:31:30 crc kubenswrapper[4996]: I0228 09:31:30.217093 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" podStartSLOduration=1.8166833100000002 podStartE2EDuration="2.217073981s" podCreationTimestamp="2026-02-28 09:31:28 +0000 UTC" firstStartedPulling="2026-02-28 09:31:29.142570234 +0000 UTC m=+1852.833373045" lastFinishedPulling="2026-02-28 09:31:29.542960865 +0000 UTC m=+1853.233763716" observedRunningTime="2026-02-28 09:31:30.210926231 +0000 UTC m=+1853.901729042" watchObservedRunningTime="2026-02-28 09:31:30.217073981 +0000 UTC m=+1853.907876792" Feb 28 09:31:35 crc kubenswrapper[4996]: I0228 09:31:35.033892 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:31:35 crc kubenswrapper[4996]: E0228 09:31:35.036070 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:31:37 crc kubenswrapper[4996]: I0228 09:31:37.270377 4996 generic.go:334] "Generic (PLEG): container finished" podID="187619a6-6bc6-4fce-a88e-f13ccf565e4d" containerID="c58a800ec77d489e2a360ad16bd16a8e07c187ce0d161d13dd5122ace80012c5" exitCode=0 Feb 28 09:31:37 crc kubenswrapper[4996]: I0228 09:31:37.270430 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" 
event={"ID":"187619a6-6bc6-4fce-a88e-f13ccf565e4d","Type":"ContainerDied","Data":"c58a800ec77d489e2a360ad16bd16a8e07c187ce0d161d13dd5122ace80012c5"} Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.708435 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.886779 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncdxm\" (UniqueName: \"kubernetes.io/projected/187619a6-6bc6-4fce-a88e-f13ccf565e4d-kube-api-access-ncdxm\") pod \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.886875 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-inventory-0\") pod \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.887109 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-ssh-key-openstack-edpm-ipam\") pod \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\" (UID: \"187619a6-6bc6-4fce-a88e-f13ccf565e4d\") " Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.894488 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/187619a6-6bc6-4fce-a88e-f13ccf565e4d-kube-api-access-ncdxm" (OuterVolumeSpecName: "kube-api-access-ncdxm") pod "187619a6-6bc6-4fce-a88e-f13ccf565e4d" (UID: "187619a6-6bc6-4fce-a88e-f13ccf565e4d"). InnerVolumeSpecName "kube-api-access-ncdxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.918543 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "187619a6-6bc6-4fce-a88e-f13ccf565e4d" (UID: "187619a6-6bc6-4fce-a88e-f13ccf565e4d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.921860 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "187619a6-6bc6-4fce-a88e-f13ccf565e4d" (UID: "187619a6-6bc6-4fce-a88e-f13ccf565e4d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.989328 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.989373 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncdxm\" (UniqueName: \"kubernetes.io/projected/187619a6-6bc6-4fce-a88e-f13ccf565e4d-kube-api-access-ncdxm\") on node \"crc\" DevicePath \"\"" Feb 28 09:31:38 crc kubenswrapper[4996]: I0228 09:31:38.989388 4996 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/187619a6-6bc6-4fce-a88e-f13ccf565e4d-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.297337 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" 
event={"ID":"187619a6-6bc6-4fce-a88e-f13ccf565e4d","Type":"ContainerDied","Data":"ed0120d1e3519cea963a0a0090855c468d83862760d440df5a1e0b9b6271cd99"} Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.297639 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed0120d1e3519cea963a0a0090855c468d83862760d440df5a1e0b9b6271cd99" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.297454 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-g2gd4" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.460710 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s"] Feb 28 09:31:39 crc kubenswrapper[4996]: E0228 09:31:39.461057 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="187619a6-6bc6-4fce-a88e-f13ccf565e4d" containerName="ssh-known-hosts-edpm-deployment" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.461074 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="187619a6-6bc6-4fce-a88e-f13ccf565e4d" containerName="ssh-known-hosts-edpm-deployment" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.461251 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="187619a6-6bc6-4fce-a88e-f13ccf565e4d" containerName="ssh-known-hosts-edpm-deployment" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.461868 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.465873 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.465991 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.466564 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.466881 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.481467 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s"] Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.599773 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2d7s\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.599875 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7fg\" (UniqueName: \"kubernetes.io/projected/55c25f7f-b29b-4504-85d7-e2be62f5ed22-kube-api-access-pw7fg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2d7s\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.599926 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2d7s\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.701877 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2d7s\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.701992 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw7fg\" (UniqueName: \"kubernetes.io/projected/55c25f7f-b29b-4504-85d7-e2be62f5ed22-kube-api-access-pw7fg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2d7s\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.702060 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2d7s\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.708903 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2d7s\" (UID: 
\"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.709160 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2d7s\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.737971 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw7fg\" (UniqueName: \"kubernetes.io/projected/55c25f7f-b29b-4504-85d7-e2be62f5ed22-kube-api-access-pw7fg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-v2d7s\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:39 crc kubenswrapper[4996]: I0228 09:31:39.843860 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:40 crc kubenswrapper[4996]: I0228 09:31:40.215107 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s"] Feb 28 09:31:40 crc kubenswrapper[4996]: W0228 09:31:40.223597 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55c25f7f_b29b_4504_85d7_e2be62f5ed22.slice/crio-cd4b498983777c6d1e3bf57e7df3ae8c7993c33ca09c5021b967bbea9f781d13 WatchSource:0}: Error finding container cd4b498983777c6d1e3bf57e7df3ae8c7993c33ca09c5021b967bbea9f781d13: Status 404 returned error can't find the container with id cd4b498983777c6d1e3bf57e7df3ae8c7993c33ca09c5021b967bbea9f781d13 Feb 28 09:31:40 crc kubenswrapper[4996]: I0228 09:31:40.305438 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" event={"ID":"55c25f7f-b29b-4504-85d7-e2be62f5ed22","Type":"ContainerStarted","Data":"cd4b498983777c6d1e3bf57e7df3ae8c7993c33ca09c5021b967bbea9f781d13"} Feb 28 09:31:41 crc kubenswrapper[4996]: I0228 09:31:41.316521 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" event={"ID":"55c25f7f-b29b-4504-85d7-e2be62f5ed22","Type":"ContainerStarted","Data":"5859032b9970d101704611da5d31cb80a4aa6bf342810fbb8db26762e8d83386"} Feb 28 09:31:42 crc kubenswrapper[4996]: I0228 09:31:42.606202 4996 scope.go:117] "RemoveContainer" containerID="14c78f8e5f74c5db001d96c96cd93a4d11bacab40797dd985e28ad02ff86d07c" Feb 28 09:31:42 crc kubenswrapper[4996]: I0228 09:31:42.685254 4996 scope.go:117] "RemoveContainer" containerID="e8b814560f894ad945f8a946d5bfa8b6865ca2c9499f785523f6072478737d52" Feb 28 09:31:42 crc kubenswrapper[4996]: I0228 09:31:42.729836 4996 scope.go:117] "RemoveContainer" 
containerID="6d747f3601b10c7447e568d9f42aa934e141da09f4b0ede9f8ba8455a64884f4" Feb 28 09:31:42 crc kubenswrapper[4996]: I0228 09:31:42.779275 4996 scope.go:117] "RemoveContainer" containerID="2baeee7b0efc41ff490010fdf5d143bc67ba41a430fb206f55c62d62da225a4e" Feb 28 09:31:42 crc kubenswrapper[4996]: I0228 09:31:42.818325 4996 scope.go:117] "RemoveContainer" containerID="a2f4f879a94ae89682e11bd2bcdf5ab11880c35ef1c3ac08f474201f62261270" Feb 28 09:31:49 crc kubenswrapper[4996]: I0228 09:31:49.034869 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:31:49 crc kubenswrapper[4996]: E0228 09:31:49.035887 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:31:49 crc kubenswrapper[4996]: I0228 09:31:49.415601 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" event={"ID":"55c25f7f-b29b-4504-85d7-e2be62f5ed22","Type":"ContainerDied","Data":"5859032b9970d101704611da5d31cb80a4aa6bf342810fbb8db26762e8d83386"} Feb 28 09:31:49 crc kubenswrapper[4996]: I0228 09:31:49.415667 4996 generic.go:334] "Generic (PLEG): container finished" podID="55c25f7f-b29b-4504-85d7-e2be62f5ed22" containerID="5859032b9970d101704611da5d31cb80a4aa6bf342810fbb8db26762e8d83386" exitCode=0 Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.849978 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.867282 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-ssh-key-openstack-edpm-ipam\") pod \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.867347 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-inventory\") pod \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.867385 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw7fg\" (UniqueName: \"kubernetes.io/projected/55c25f7f-b29b-4504-85d7-e2be62f5ed22-kube-api-access-pw7fg\") pod \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\" (UID: \"55c25f7f-b29b-4504-85d7-e2be62f5ed22\") " Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.874179 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55c25f7f-b29b-4504-85d7-e2be62f5ed22-kube-api-access-pw7fg" (OuterVolumeSpecName: "kube-api-access-pw7fg") pod "55c25f7f-b29b-4504-85d7-e2be62f5ed22" (UID: "55c25f7f-b29b-4504-85d7-e2be62f5ed22"). InnerVolumeSpecName "kube-api-access-pw7fg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.890955 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "55c25f7f-b29b-4504-85d7-e2be62f5ed22" (UID: "55c25f7f-b29b-4504-85d7-e2be62f5ed22"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.897184 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-inventory" (OuterVolumeSpecName: "inventory") pod "55c25f7f-b29b-4504-85d7-e2be62f5ed22" (UID: "55c25f7f-b29b-4504-85d7-e2be62f5ed22"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.969538 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.969592 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw7fg\" (UniqueName: \"kubernetes.io/projected/55c25f7f-b29b-4504-85d7-e2be62f5ed22-kube-api-access-pw7fg\") on node \"crc\" DevicePath \"\"" Feb 28 09:31:50 crc kubenswrapper[4996]: I0228 09:31:50.969605 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/55c25f7f-b29b-4504-85d7-e2be62f5ed22-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.437926 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" 
event={"ID":"55c25f7f-b29b-4504-85d7-e2be62f5ed22","Type":"ContainerDied","Data":"cd4b498983777c6d1e3bf57e7df3ae8c7993c33ca09c5021b967bbea9f781d13"} Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.438489 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4b498983777c6d1e3bf57e7df3ae8c7993c33ca09c5021b967bbea9f781d13" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.438085 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.538838 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt"] Feb 28 09:31:51 crc kubenswrapper[4996]: E0228 09:31:51.539347 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55c25f7f-b29b-4504-85d7-e2be62f5ed22" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.539373 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="55c25f7f-b29b-4504-85d7-e2be62f5ed22" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.539614 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="55c25f7f-b29b-4504-85d7-e2be62f5ed22" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.540481 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.542387 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.542952 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.543193 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.543214 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.574594 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt"] Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.580289 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.580343 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.580366 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfc5\" (UniqueName: \"kubernetes.io/projected/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-kube-api-access-hlfc5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.681748 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.681795 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlfc5\" (UniqueName: \"kubernetes.io/projected/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-kube-api-access-hlfc5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.681933 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.689913 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.691032 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.702265 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlfc5\" (UniqueName: \"kubernetes.io/projected/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-kube-api-access-hlfc5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:51 crc kubenswrapper[4996]: I0228 09:31:51.868057 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:31:52 crc kubenswrapper[4996]: I0228 09:31:52.434765 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt"] Feb 28 09:31:53 crc kubenswrapper[4996]: I0228 09:31:53.454714 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" event={"ID":"392b3689-4b9e-4ce6-a2a0-55fbdc67335d","Type":"ContainerStarted","Data":"539b0a91c44ec04986643c1273cee1d416bbb96b8703cc7c4f33072a87a8b2e3"} Feb 28 09:31:53 crc kubenswrapper[4996]: I0228 09:31:53.455057 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" event={"ID":"392b3689-4b9e-4ce6-a2a0-55fbdc67335d","Type":"ContainerStarted","Data":"7ec5a5029de29bb57d846cd04e59d8c7966094b260a41dd69b3a218436df4e8a"} Feb 28 09:31:53 crc kubenswrapper[4996]: I0228 09:31:53.487143 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" podStartSLOduration=2.003343997 podStartE2EDuration="2.487110187s" podCreationTimestamp="2026-02-28 09:31:51 +0000 UTC" firstStartedPulling="2026-02-28 09:31:52.445868571 +0000 UTC m=+1876.136671392" lastFinishedPulling="2026-02-28 09:31:52.929634761 +0000 UTC m=+1876.620437582" observedRunningTime="2026-02-28 09:31:53.472353617 +0000 UTC m=+1877.163156438" watchObservedRunningTime="2026-02-28 09:31:53.487110187 +0000 UTC m=+1877.177913038" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.034032 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:32:00 crc kubenswrapper[4996]: E0228 09:32:00.035153 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.139947 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537852-6fhtm"] Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.142286 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537852-6fhtm" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.144585 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.145824 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.146077 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.154307 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537852-6fhtm"] Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.268899 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z297r\" (UniqueName: \"kubernetes.io/projected/40fe270d-4b8b-416d-96e0-f863fcd0d969-kube-api-access-z297r\") pod \"auto-csr-approver-29537852-6fhtm\" (UID: \"40fe270d-4b8b-416d-96e0-f863fcd0d969\") " pod="openshift-infra/auto-csr-approver-29537852-6fhtm" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.371917 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z297r\" (UniqueName: 
\"kubernetes.io/projected/40fe270d-4b8b-416d-96e0-f863fcd0d969-kube-api-access-z297r\") pod \"auto-csr-approver-29537852-6fhtm\" (UID: \"40fe270d-4b8b-416d-96e0-f863fcd0d969\") " pod="openshift-infra/auto-csr-approver-29537852-6fhtm" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.393163 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z297r\" (UniqueName: \"kubernetes.io/projected/40fe270d-4b8b-416d-96e0-f863fcd0d969-kube-api-access-z297r\") pod \"auto-csr-approver-29537852-6fhtm\" (UID: \"40fe270d-4b8b-416d-96e0-f863fcd0d969\") " pod="openshift-infra/auto-csr-approver-29537852-6fhtm" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.504925 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537852-6fhtm" Feb 28 09:32:00 crc kubenswrapper[4996]: I0228 09:32:00.950956 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537852-6fhtm"] Feb 28 09:32:01 crc kubenswrapper[4996]: I0228 09:32:01.528781 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537852-6fhtm" event={"ID":"40fe270d-4b8b-416d-96e0-f863fcd0d969","Type":"ContainerStarted","Data":"33aad20f5073eb867d888ccc163b12d6a6130687bdbb084678be5f62313f8b74"} Feb 28 09:32:02 crc kubenswrapper[4996]: I0228 09:32:02.538034 4996 generic.go:334] "Generic (PLEG): container finished" podID="392b3689-4b9e-4ce6-a2a0-55fbdc67335d" containerID="539b0a91c44ec04986643c1273cee1d416bbb96b8703cc7c4f33072a87a8b2e3" exitCode=0 Feb 28 09:32:02 crc kubenswrapper[4996]: I0228 09:32:02.538100 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" event={"ID":"392b3689-4b9e-4ce6-a2a0-55fbdc67335d","Type":"ContainerDied","Data":"539b0a91c44ec04986643c1273cee1d416bbb96b8703cc7c4f33072a87a8b2e3"} Feb 28 09:32:02 crc kubenswrapper[4996]: I0228 09:32:02.541028 4996 
generic.go:334] "Generic (PLEG): container finished" podID="40fe270d-4b8b-416d-96e0-f863fcd0d969" containerID="b4f67f78313a9ca71b2bb4cda1945a33166965c1445002e8ccafd497e1b73a8d" exitCode=0 Feb 28 09:32:02 crc kubenswrapper[4996]: I0228 09:32:02.541073 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537852-6fhtm" event={"ID":"40fe270d-4b8b-416d-96e0-f863fcd0d969","Type":"ContainerDied","Data":"b4f67f78313a9ca71b2bb4cda1945a33166965c1445002e8ccafd497e1b73a8d"} Feb 28 09:32:03 crc kubenswrapper[4996]: I0228 09:32:03.958559 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537852-6fhtm" Feb 28 09:32:03 crc kubenswrapper[4996]: I0228 09:32:03.963694 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.058749 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-ssh-key-openstack-edpm-ipam\") pod \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.058858 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z297r\" (UniqueName: \"kubernetes.io/projected/40fe270d-4b8b-416d-96e0-f863fcd0d969-kube-api-access-z297r\") pod \"40fe270d-4b8b-416d-96e0-f863fcd0d969\" (UID: \"40fe270d-4b8b-416d-96e0-f863fcd0d969\") " Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.058930 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlfc5\" (UniqueName: \"kubernetes.io/projected/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-kube-api-access-hlfc5\") pod \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\" (UID: 
\"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.058963 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-inventory\") pod \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\" (UID: \"392b3689-4b9e-4ce6-a2a0-55fbdc67335d\") " Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.064720 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-kube-api-access-hlfc5" (OuterVolumeSpecName: "kube-api-access-hlfc5") pod "392b3689-4b9e-4ce6-a2a0-55fbdc67335d" (UID: "392b3689-4b9e-4ce6-a2a0-55fbdc67335d"). InnerVolumeSpecName "kube-api-access-hlfc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.064863 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40fe270d-4b8b-416d-96e0-f863fcd0d969-kube-api-access-z297r" (OuterVolumeSpecName: "kube-api-access-z297r") pod "40fe270d-4b8b-416d-96e0-f863fcd0d969" (UID: "40fe270d-4b8b-416d-96e0-f863fcd0d969"). InnerVolumeSpecName "kube-api-access-z297r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.084894 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-inventory" (OuterVolumeSpecName: "inventory") pod "392b3689-4b9e-4ce6-a2a0-55fbdc67335d" (UID: "392b3689-4b9e-4ce6-a2a0-55fbdc67335d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.089171 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "392b3689-4b9e-4ce6-a2a0-55fbdc67335d" (UID: "392b3689-4b9e-4ce6-a2a0-55fbdc67335d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.160796 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.160832 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z297r\" (UniqueName: \"kubernetes.io/projected/40fe270d-4b8b-416d-96e0-f863fcd0d969-kube-api-access-z297r\") on node \"crc\" DevicePath \"\"" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.160847 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlfc5\" (UniqueName: \"kubernetes.io/projected/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-kube-api-access-hlfc5\") on node \"crc\" DevicePath \"\"" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.160861 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/392b3689-4b9e-4ce6-a2a0-55fbdc67335d-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.567098 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537852-6fhtm" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.567214 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537852-6fhtm" event={"ID":"40fe270d-4b8b-416d-96e0-f863fcd0d969","Type":"ContainerDied","Data":"33aad20f5073eb867d888ccc163b12d6a6130687bdbb084678be5f62313f8b74"} Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.567284 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33aad20f5073eb867d888ccc163b12d6a6130687bdbb084678be5f62313f8b74" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.571082 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" event={"ID":"392b3689-4b9e-4ce6-a2a0-55fbdc67335d","Type":"ContainerDied","Data":"7ec5a5029de29bb57d846cd04e59d8c7966094b260a41dd69b3a218436df4e8a"} Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.571131 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ec5a5029de29bb57d846cd04e59d8c7966094b260a41dd69b3a218436df4e8a" Feb 28 09:32:04 crc kubenswrapper[4996]: I0228 09:32:04.571264 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt" Feb 28 09:32:05 crc kubenswrapper[4996]: I0228 09:32:05.059553 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537846-f58cg"] Feb 28 09:32:05 crc kubenswrapper[4996]: I0228 09:32:05.066589 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537846-f58cg"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.058580 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-pm9cp"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.074129 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-j68mh"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.086953 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-pm9cp"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.096643 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-815f-account-create-update-wg8b6"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.103645 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-j68mh"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.109911 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-9c05-account-create-update-wvb9n"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.116463 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-815f-account-create-update-wg8b6"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.123203 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-9c05-account-create-update-wvb9n"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.129899 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3a07-account-create-update-8d4gz"] Feb 28 09:32:06 crc 
kubenswrapper[4996]: I0228 09:32:06.136477 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-bg572"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.142578 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3a07-account-create-update-8d4gz"] Feb 28 09:32:06 crc kubenswrapper[4996]: I0228 09:32:06.150094 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-bg572"] Feb 28 09:32:07 crc kubenswrapper[4996]: I0228 09:32:07.055243 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c5cca9a-4b23-405e-afed-de5776d7a46e" path="/var/lib/kubelet/pods/0c5cca9a-4b23-405e-afed-de5776d7a46e/volumes" Feb 28 09:32:07 crc kubenswrapper[4996]: I0228 09:32:07.056828 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46352d7d-4e62-4a29-8814-a8e2e33ef813" path="/var/lib/kubelet/pods/46352d7d-4e62-4a29-8814-a8e2e33ef813/volumes" Feb 28 09:32:07 crc kubenswrapper[4996]: I0228 09:32:07.057994 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71586534-5889-4027-8688-9b5b3e3394ea" path="/var/lib/kubelet/pods/71586534-5889-4027-8688-9b5b3e3394ea/volumes" Feb 28 09:32:07 crc kubenswrapper[4996]: I0228 09:32:07.059214 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9363ee4e-971c-4a87-9c13-a349d02ac678" path="/var/lib/kubelet/pods/9363ee4e-971c-4a87-9c13-a349d02ac678/volumes" Feb 28 09:32:07 crc kubenswrapper[4996]: I0228 09:32:07.061317 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4bc4b2-0222-49fc-995c-0b809d5e19fe" path="/var/lib/kubelet/pods/9d4bc4b2-0222-49fc-995c-0b809d5e19fe/volumes" Feb 28 09:32:07 crc kubenswrapper[4996]: I0228 09:32:07.062509 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a66be9fb-71a8-4d58-8a10-1cbba0fe325f" path="/var/lib/kubelet/pods/a66be9fb-71a8-4d58-8a10-1cbba0fe325f/volumes" Feb 28 09:32:07 
crc kubenswrapper[4996]: I0228 09:32:07.063481 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6a7e7c0-e530-41f6-a62e-57f53bb376b8" path="/var/lib/kubelet/pods/c6a7e7c0-e530-41f6-a62e-57f53bb376b8/volumes" Feb 28 09:32:11 crc kubenswrapper[4996]: I0228 09:32:11.032870 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:32:11 crc kubenswrapper[4996]: E0228 09:32:11.033648 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:32:26 crc kubenswrapper[4996]: I0228 09:32:26.032825 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:32:26 crc kubenswrapper[4996]: I0228 09:32:26.839478 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"d3f7c29ff6876349fc77935eade8f1cab613d1ff9df0a3d784466d07cdf7529f"} Feb 28 09:32:32 crc kubenswrapper[4996]: I0228 09:32:32.075097 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwpcz"] Feb 28 09:32:32 crc kubenswrapper[4996]: I0228 09:32:32.091103 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-gwpcz"] Feb 28 09:32:33 crc kubenswrapper[4996]: I0228 09:32:33.051649 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97aca1c-945f-4e22-aa03-667cc7345de5" 
path="/var/lib/kubelet/pods/d97aca1c-945f-4e22-aa03-667cc7345de5/volumes" Feb 28 09:32:42 crc kubenswrapper[4996]: I0228 09:32:42.963211 4996 scope.go:117] "RemoveContainer" containerID="410d5881214954dcc76ac8c0ce6464581f4d9ad243e263b9cbca79d41f5aa82e" Feb 28 09:32:43 crc kubenswrapper[4996]: I0228 09:32:43.045508 4996 scope.go:117] "RemoveContainer" containerID="65d722f6ed1409211b82ceadb6c0fab219b7eee4e3a6c876a8e17a30bc40a354" Feb 28 09:32:43 crc kubenswrapper[4996]: I0228 09:32:43.092475 4996 scope.go:117] "RemoveContainer" containerID="7c86db71eab66a95cd99adf9766be543240f94c3d2efb6c2c09e8b95b07bfa8c" Feb 28 09:32:43 crc kubenswrapper[4996]: I0228 09:32:43.139831 4996 scope.go:117] "RemoveContainer" containerID="8700d607250633fe65d7d4780548bd3013caf74ed57c222d1ddb17519c45634c" Feb 28 09:32:43 crc kubenswrapper[4996]: I0228 09:32:43.172216 4996 scope.go:117] "RemoveContainer" containerID="94475a765872defb9164e91cdd045fdfdefd13e8f3bf40f3ff1152b0cd9115fd" Feb 28 09:32:43 crc kubenswrapper[4996]: I0228 09:32:43.267265 4996 scope.go:117] "RemoveContainer" containerID="7f8689ef60cb53d8d46d0698dcc448d0c0d425b7cc9d4243f66061c92be146bc" Feb 28 09:32:43 crc kubenswrapper[4996]: I0228 09:32:43.296646 4996 scope.go:117] "RemoveContainer" containerID="68153e18b02681effe53edc7513fee05c8b06db05573f0b6ab718cef7ebd74f6" Feb 28 09:32:43 crc kubenswrapper[4996]: I0228 09:32:43.324786 4996 scope.go:117] "RemoveContainer" containerID="16bdb32eab1059351bc80a579d8edfca3cc2ad8818df424f78c453f17975d860" Feb 28 09:32:51 crc kubenswrapper[4996]: I0228 09:32:51.066316 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-kchvm"] Feb 28 09:32:51 crc kubenswrapper[4996]: I0228 09:32:51.068993 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-kchvm"] Feb 28 09:32:53 crc kubenswrapper[4996]: I0228 09:32:53.054406 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7dc51c80-ff0d-4bec-80c7-bd45d5e4970d" path="/var/lib/kubelet/pods/7dc51c80-ff0d-4bec-80c7-bd45d5e4970d/volumes" Feb 28 09:32:55 crc kubenswrapper[4996]: I0228 09:32:55.096551 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pf8pr"] Feb 28 09:32:55 crc kubenswrapper[4996]: I0228 09:32:55.104880 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-pf8pr"] Feb 28 09:32:57 crc kubenswrapper[4996]: I0228 09:32:57.061883 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55f51fe-d8cc-47d5-9b5d-29877c65069a" path="/var/lib/kubelet/pods/e55f51fe-d8cc-47d5-9b5d-29877c65069a/volumes" Feb 28 09:33:00 crc kubenswrapper[4996]: I0228 09:33:00.934498 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bws4k"] Feb 28 09:33:00 crc kubenswrapper[4996]: E0228 09:33:00.936975 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40fe270d-4b8b-416d-96e0-f863fcd0d969" containerName="oc" Feb 28 09:33:00 crc kubenswrapper[4996]: I0228 09:33:00.937188 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="40fe270d-4b8b-416d-96e0-f863fcd0d969" containerName="oc" Feb 28 09:33:00 crc kubenswrapper[4996]: E0228 09:33:00.937347 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392b3689-4b9e-4ce6-a2a0-55fbdc67335d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:33:00 crc kubenswrapper[4996]: I0228 09:33:00.937476 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="392b3689-4b9e-4ce6-a2a0-55fbdc67335d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:33:00 crc kubenswrapper[4996]: I0228 09:33:00.940031 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="40fe270d-4b8b-416d-96e0-f863fcd0d969" containerName="oc" Feb 28 09:33:00 crc kubenswrapper[4996]: I0228 09:33:00.940201 4996 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="392b3689-4b9e-4ce6-a2a0-55fbdc67335d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:33:00 crc kubenswrapper[4996]: I0228 09:33:00.943048 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:00 crc kubenswrapper[4996]: I0228 09:33:00.977649 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bws4k"] Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.098913 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc47x\" (UniqueName: \"kubernetes.io/projected/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-kube-api-access-jc47x\") pod \"redhat-operators-bws4k\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.098986 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-utilities\") pod \"redhat-operators-bws4k\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.099119 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-catalog-content\") pod \"redhat-operators-bws4k\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.200787 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc47x\" (UniqueName: 
\"kubernetes.io/projected/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-kube-api-access-jc47x\") pod \"redhat-operators-bws4k\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.201205 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-utilities\") pod \"redhat-operators-bws4k\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.201717 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-utilities\") pod \"redhat-operators-bws4k\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.201853 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-catalog-content\") pod \"redhat-operators-bws4k\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.202255 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-catalog-content\") pod \"redhat-operators-bws4k\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.227309 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc47x\" (UniqueName: 
\"kubernetes.io/projected/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-kube-api-access-jc47x\") pod \"redhat-operators-bws4k\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.278220 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:01 crc kubenswrapper[4996]: I0228 09:33:01.719913 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bws4k"] Feb 28 09:33:02 crc kubenswrapper[4996]: I0228 09:33:02.228683 4996 generic.go:334] "Generic (PLEG): container finished" podID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerID="1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b" exitCode=0 Feb 28 09:33:02 crc kubenswrapper[4996]: I0228 09:33:02.228729 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bws4k" event={"ID":"446846e0-f018-4dd5-97e3-bc7e6a7f56a4","Type":"ContainerDied","Data":"1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b"} Feb 28 09:33:02 crc kubenswrapper[4996]: I0228 09:33:02.228797 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bws4k" event={"ID":"446846e0-f018-4dd5-97e3-bc7e6a7f56a4","Type":"ContainerStarted","Data":"a0030cb637b211f6d6bd3c7c1e5ec77de280096205ff1f6eb693db9e3cf0ed0c"} Feb 28 09:33:03 crc kubenswrapper[4996]: I0228 09:33:03.244052 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bws4k" event={"ID":"446846e0-f018-4dd5-97e3-bc7e6a7f56a4","Type":"ContainerStarted","Data":"819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86"} Feb 28 09:33:04 crc kubenswrapper[4996]: I0228 09:33:04.259514 4996 generic.go:334] "Generic (PLEG): container finished" podID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" 
containerID="819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86" exitCode=0 Feb 28 09:33:04 crc kubenswrapper[4996]: I0228 09:33:04.259558 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bws4k" event={"ID":"446846e0-f018-4dd5-97e3-bc7e6a7f56a4","Type":"ContainerDied","Data":"819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86"} Feb 28 09:33:05 crc kubenswrapper[4996]: I0228 09:33:05.271208 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bws4k" event={"ID":"446846e0-f018-4dd5-97e3-bc7e6a7f56a4","Type":"ContainerStarted","Data":"9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc"} Feb 28 09:33:05 crc kubenswrapper[4996]: I0228 09:33:05.296248 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bws4k" podStartSLOduration=2.703272838 podStartE2EDuration="5.296227161s" podCreationTimestamp="2026-02-28 09:33:00 +0000 UTC" firstStartedPulling="2026-02-28 09:33:02.231267404 +0000 UTC m=+1945.922070215" lastFinishedPulling="2026-02-28 09:33:04.824221717 +0000 UTC m=+1948.515024538" observedRunningTime="2026-02-28 09:33:05.28918057 +0000 UTC m=+1948.979983401" watchObservedRunningTime="2026-02-28 09:33:05.296227161 +0000 UTC m=+1948.987029972" Feb 28 09:33:11 crc kubenswrapper[4996]: I0228 09:33:11.279388 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:11 crc kubenswrapper[4996]: I0228 09:33:11.279947 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:12 crc kubenswrapper[4996]: I0228 09:33:12.340215 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bws4k" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerName="registry-server" 
probeResult="failure" output=< Feb 28 09:33:12 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 09:33:12 crc kubenswrapper[4996]: > Feb 28 09:33:21 crc kubenswrapper[4996]: I0228 09:33:21.363339 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:21 crc kubenswrapper[4996]: I0228 09:33:21.435353 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:21 crc kubenswrapper[4996]: I0228 09:33:21.614604 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bws4k"] Feb 28 09:33:22 crc kubenswrapper[4996]: I0228 09:33:22.484426 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bws4k" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerName="registry-server" containerID="cri-o://9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc" gracePeriod=2 Feb 28 09:33:22 crc kubenswrapper[4996]: I0228 09:33:22.993684 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.136753 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-catalog-content\") pod \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.136911 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc47x\" (UniqueName: \"kubernetes.io/projected/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-kube-api-access-jc47x\") pod \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.137181 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-utilities\") pod \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\" (UID: \"446846e0-f018-4dd5-97e3-bc7e6a7f56a4\") " Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.138923 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-utilities" (OuterVolumeSpecName: "utilities") pod "446846e0-f018-4dd5-97e3-bc7e6a7f56a4" (UID: "446846e0-f018-4dd5-97e3-bc7e6a7f56a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.144652 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-kube-api-access-jc47x" (OuterVolumeSpecName: "kube-api-access-jc47x") pod "446846e0-f018-4dd5-97e3-bc7e6a7f56a4" (UID: "446846e0-f018-4dd5-97e3-bc7e6a7f56a4"). InnerVolumeSpecName "kube-api-access-jc47x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.240056 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc47x\" (UniqueName: \"kubernetes.io/projected/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-kube-api-access-jc47x\") on node \"crc\" DevicePath \"\"" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.240747 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.267709 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "446846e0-f018-4dd5-97e3-bc7e6a7f56a4" (UID: "446846e0-f018-4dd5-97e3-bc7e6a7f56a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.342877 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/446846e0-f018-4dd5-97e3-bc7e6a7f56a4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.496953 4996 generic.go:334] "Generic (PLEG): container finished" podID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerID="9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc" exitCode=0 Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.497028 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bws4k" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.497062 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bws4k" event={"ID":"446846e0-f018-4dd5-97e3-bc7e6a7f56a4","Type":"ContainerDied","Data":"9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc"} Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.497342 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bws4k" event={"ID":"446846e0-f018-4dd5-97e3-bc7e6a7f56a4","Type":"ContainerDied","Data":"a0030cb637b211f6d6bd3c7c1e5ec77de280096205ff1f6eb693db9e3cf0ed0c"} Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.497508 4996 scope.go:117] "RemoveContainer" containerID="9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.534803 4996 scope.go:117] "RemoveContainer" containerID="819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.539363 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bws4k"] Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.551243 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bws4k"] Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.563407 4996 scope.go:117] "RemoveContainer" containerID="1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.613031 4996 scope.go:117] "RemoveContainer" containerID="9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc" Feb 28 09:33:23 crc kubenswrapper[4996]: E0228 09:33:23.613547 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc\": container with ID starting with 9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc not found: ID does not exist" containerID="9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.613609 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc"} err="failed to get container status \"9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc\": rpc error: code = NotFound desc = could not find container \"9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc\": container with ID starting with 9b7a255df4c6d3a2425c4ffc80402ad4a3bdc40c198340ba54bdbe8c57b77ddc not found: ID does not exist" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.613640 4996 scope.go:117] "RemoveContainer" containerID="819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86" Feb 28 09:33:23 crc kubenswrapper[4996]: E0228 09:33:23.614409 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86\": container with ID starting with 819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86 not found: ID does not exist" containerID="819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.614471 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86"} err="failed to get container status \"819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86\": rpc error: code = NotFound desc = could not find container \"819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86\": container with ID 
starting with 819ff23bd5bab1a5c7eb8ab98abd6b5baf6d99bd27f9b0143b69744ff76d5f86 not found: ID does not exist" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.614503 4996 scope.go:117] "RemoveContainer" containerID="1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b" Feb 28 09:33:23 crc kubenswrapper[4996]: E0228 09:33:23.614944 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b\": container with ID starting with 1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b not found: ID does not exist" containerID="1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b" Feb 28 09:33:23 crc kubenswrapper[4996]: I0228 09:33:23.614991 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b"} err="failed to get container status \"1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b\": rpc error: code = NotFound desc = could not find container \"1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b\": container with ID starting with 1b8220becd361d51ce4b50469ba02d6c0889e4a4a8bf2527223afac595b9e47b not found: ID does not exist" Feb 28 09:33:25 crc kubenswrapper[4996]: I0228 09:33:25.058060 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" path="/var/lib/kubelet/pods/446846e0-f018-4dd5-97e3-bc7e6a7f56a4/volumes" Feb 28 09:33:36 crc kubenswrapper[4996]: I0228 09:33:36.044367 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xwlrs"] Feb 28 09:33:36 crc kubenswrapper[4996]: I0228 09:33:36.052088 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xwlrs"] Feb 28 09:33:37 crc kubenswrapper[4996]: I0228 09:33:37.050117 4996 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aca7df8-ff8f-457d-b65b-fba7e0eed249" path="/var/lib/kubelet/pods/2aca7df8-ff8f-457d-b65b-fba7e0eed249/volumes" Feb 28 09:33:43 crc kubenswrapper[4996]: I0228 09:33:43.491279 4996 scope.go:117] "RemoveContainer" containerID="9b768e5c3391e0ec2c1b8b1272e8e8ba9cf6b9c4c4eb17a944228747404439b6" Feb 28 09:33:43 crc kubenswrapper[4996]: I0228 09:33:43.552342 4996 scope.go:117] "RemoveContainer" containerID="fd159c64ce4b593bdda85f84be1e153076f047d334a404cb0343cdf5a6a1446c" Feb 28 09:33:43 crc kubenswrapper[4996]: I0228 09:33:43.591881 4996 scope.go:117] "RemoveContainer" containerID="f5e5764fc2eee7b2ce75f276442d69f89e72ad38ff1431413ae64a2494e44e5a" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.140447 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537854-hkcxw"] Feb 28 09:34:00 crc kubenswrapper[4996]: E0228 09:34:00.141262 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerName="extract-content" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.141276 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerName="extract-content" Feb 28 09:34:00 crc kubenswrapper[4996]: E0228 09:34:00.141307 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerName="extract-utilities" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.141313 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerName="extract-utilities" Feb 28 09:34:00 crc kubenswrapper[4996]: E0228 09:34:00.141325 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerName="registry-server" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.141331 4996 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerName="registry-server" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.141494 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="446846e0-f018-4dd5-97e3-bc7e6a7f56a4" containerName="registry-server" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.142103 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537854-hkcxw" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.145308 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.145319 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.147386 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.152336 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537854-hkcxw"] Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.173177 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9hxk\" (UniqueName: \"kubernetes.io/projected/02749d81-6850-4494-9008-f509990982ce-kube-api-access-w9hxk\") pod \"auto-csr-approver-29537854-hkcxw\" (UID: \"02749d81-6850-4494-9008-f509990982ce\") " pod="openshift-infra/auto-csr-approver-29537854-hkcxw" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.275420 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9hxk\" (UniqueName: \"kubernetes.io/projected/02749d81-6850-4494-9008-f509990982ce-kube-api-access-w9hxk\") pod \"auto-csr-approver-29537854-hkcxw\" (UID: \"02749d81-6850-4494-9008-f509990982ce\") " 
pod="openshift-infra/auto-csr-approver-29537854-hkcxw" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.299744 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9hxk\" (UniqueName: \"kubernetes.io/projected/02749d81-6850-4494-9008-f509990982ce-kube-api-access-w9hxk\") pod \"auto-csr-approver-29537854-hkcxw\" (UID: \"02749d81-6850-4494-9008-f509990982ce\") " pod="openshift-infra/auto-csr-approver-29537854-hkcxw" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.459049 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537854-hkcxw" Feb 28 09:34:00 crc kubenswrapper[4996]: I0228 09:34:00.965679 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537854-hkcxw"] Feb 28 09:34:01 crc kubenswrapper[4996]: I0228 09:34:01.885107 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537854-hkcxw" event={"ID":"02749d81-6850-4494-9008-f509990982ce","Type":"ContainerStarted","Data":"99d9d0063c6a335641327ddb6df1d8636d18e14a6d0fc492be77e0337b8ab04b"} Feb 28 09:34:02 crc kubenswrapper[4996]: I0228 09:34:02.894737 4996 generic.go:334] "Generic (PLEG): container finished" podID="02749d81-6850-4494-9008-f509990982ce" containerID="7cff99767df48362bf8652d3732ca014cddc1dea97853b80bfb8b28b8183b55f" exitCode=0 Feb 28 09:34:02 crc kubenswrapper[4996]: I0228 09:34:02.894817 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537854-hkcxw" event={"ID":"02749d81-6850-4494-9008-f509990982ce","Type":"ContainerDied","Data":"7cff99767df48362bf8652d3732ca014cddc1dea97853b80bfb8b28b8183b55f"} Feb 28 09:34:04 crc kubenswrapper[4996]: I0228 09:34:04.264093 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537854-hkcxw" Feb 28 09:34:04 crc kubenswrapper[4996]: I0228 09:34:04.355140 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9hxk\" (UniqueName: \"kubernetes.io/projected/02749d81-6850-4494-9008-f509990982ce-kube-api-access-w9hxk\") pod \"02749d81-6850-4494-9008-f509990982ce\" (UID: \"02749d81-6850-4494-9008-f509990982ce\") " Feb 28 09:34:04 crc kubenswrapper[4996]: I0228 09:34:04.359893 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02749d81-6850-4494-9008-f509990982ce-kube-api-access-w9hxk" (OuterVolumeSpecName: "kube-api-access-w9hxk") pod "02749d81-6850-4494-9008-f509990982ce" (UID: "02749d81-6850-4494-9008-f509990982ce"). InnerVolumeSpecName "kube-api-access-w9hxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:34:04 crc kubenswrapper[4996]: I0228 09:34:04.457704 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9hxk\" (UniqueName: \"kubernetes.io/projected/02749d81-6850-4494-9008-f509990982ce-kube-api-access-w9hxk\") on node \"crc\" DevicePath \"\"" Feb 28 09:34:04 crc kubenswrapper[4996]: I0228 09:34:04.914559 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537854-hkcxw" event={"ID":"02749d81-6850-4494-9008-f509990982ce","Type":"ContainerDied","Data":"99d9d0063c6a335641327ddb6df1d8636d18e14a6d0fc492be77e0337b8ab04b"} Feb 28 09:34:04 crc kubenswrapper[4996]: I0228 09:34:04.914608 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d9d0063c6a335641327ddb6df1d8636d18e14a6d0fc492be77e0337b8ab04b" Feb 28 09:34:04 crc kubenswrapper[4996]: I0228 09:34:04.914667 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537854-hkcxw" Feb 28 09:34:05 crc kubenswrapper[4996]: I0228 09:34:05.345154 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537848-sdf7l"] Feb 28 09:34:05 crc kubenswrapper[4996]: I0228 09:34:05.353285 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537848-sdf7l"] Feb 28 09:34:07 crc kubenswrapper[4996]: I0228 09:34:07.046156 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42239bf7-0b90-45ed-9624-4e1e4016d118" path="/var/lib/kubelet/pods/42239bf7-0b90-45ed-9624-4e1e4016d118/volumes" Feb 28 09:34:42 crc kubenswrapper[4996]: I0228 09:34:42.249470 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:34:42 crc kubenswrapper[4996]: I0228 09:34:42.250140 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:34:43 crc kubenswrapper[4996]: I0228 09:34:43.692068 4996 scope.go:117] "RemoveContainer" containerID="c0c32b8edb9eacff2735b0b87ed9522d39ce55675569ff5a1888306db7c967f8" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.698192 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jgst4"] Feb 28 09:34:53 crc kubenswrapper[4996]: E0228 09:34:53.699136 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02749d81-6850-4494-9008-f509990982ce" containerName="oc" Feb 28 09:34:53 crc 
kubenswrapper[4996]: I0228 09:34:53.699152 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="02749d81-6850-4494-9008-f509990982ce" containerName="oc" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.699367 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="02749d81-6850-4494-9008-f509990982ce" containerName="oc" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.700866 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.711033 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgst4"] Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.739070 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2xc6\" (UniqueName: \"kubernetes.io/projected/b194f38f-0140-4999-8fa4-78603ca43f35-kube-api-access-s2xc6\") pod \"certified-operators-jgst4\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.739601 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-utilities\") pod \"certified-operators-jgst4\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.739754 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-catalog-content\") pod \"certified-operators-jgst4\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:53 crc 
kubenswrapper[4996]: I0228 09:34:53.841864 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2xc6\" (UniqueName: \"kubernetes.io/projected/b194f38f-0140-4999-8fa4-78603ca43f35-kube-api-access-s2xc6\") pod \"certified-operators-jgst4\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.842339 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-utilities\") pod \"certified-operators-jgst4\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.842490 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-catalog-content\") pod \"certified-operators-jgst4\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.842801 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-utilities\") pod \"certified-operators-jgst4\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.842912 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-catalog-content\") pod \"certified-operators-jgst4\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:53 crc kubenswrapper[4996]: I0228 09:34:53.861919 
4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2xc6\" (UniqueName: \"kubernetes.io/projected/b194f38f-0140-4999-8fa4-78603ca43f35-kube-api-access-s2xc6\") pod \"certified-operators-jgst4\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:54 crc kubenswrapper[4996]: I0228 09:34:54.035656 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:34:54 crc kubenswrapper[4996]: I0228 09:34:54.567681 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgst4"] Feb 28 09:34:55 crc kubenswrapper[4996]: I0228 09:34:55.398683 4996 generic.go:334] "Generic (PLEG): container finished" podID="b194f38f-0140-4999-8fa4-78603ca43f35" containerID="efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2" exitCode=0 Feb 28 09:34:55 crc kubenswrapper[4996]: I0228 09:34:55.398789 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgst4" event={"ID":"b194f38f-0140-4999-8fa4-78603ca43f35","Type":"ContainerDied","Data":"efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2"} Feb 28 09:34:55 crc kubenswrapper[4996]: I0228 09:34:55.398984 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgst4" event={"ID":"b194f38f-0140-4999-8fa4-78603ca43f35","Type":"ContainerStarted","Data":"b2071eb27a10bcd0fb3259375a5c45a95e0fedc41a2cf785d389e593a7b9ced1"} Feb 28 09:34:56 crc kubenswrapper[4996]: I0228 09:34:56.411825 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgst4" event={"ID":"b194f38f-0140-4999-8fa4-78603ca43f35","Type":"ContainerStarted","Data":"cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf"} Feb 28 09:34:57 crc kubenswrapper[4996]: I0228 09:34:57.425731 4996 
generic.go:334] "Generic (PLEG): container finished" podID="b194f38f-0140-4999-8fa4-78603ca43f35" containerID="cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf" exitCode=0 Feb 28 09:34:57 crc kubenswrapper[4996]: I0228 09:34:57.425801 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgst4" event={"ID":"b194f38f-0140-4999-8fa4-78603ca43f35","Type":"ContainerDied","Data":"cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf"} Feb 28 09:34:58 crc kubenswrapper[4996]: I0228 09:34:58.437126 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgst4" event={"ID":"b194f38f-0140-4999-8fa4-78603ca43f35","Type":"ContainerStarted","Data":"1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57"} Feb 28 09:34:58 crc kubenswrapper[4996]: I0228 09:34:58.468370 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jgst4" podStartSLOduration=3.002059857 podStartE2EDuration="5.468353793s" podCreationTimestamp="2026-02-28 09:34:53 +0000 UTC" firstStartedPulling="2026-02-28 09:34:55.401191695 +0000 UTC m=+2059.091994506" lastFinishedPulling="2026-02-28 09:34:57.867485621 +0000 UTC m=+2061.558288442" observedRunningTime="2026-02-28 09:34:58.461052455 +0000 UTC m=+2062.151855286" watchObservedRunningTime="2026-02-28 09:34:58.468353793 +0000 UTC m=+2062.159156594" Feb 28 09:35:04 crc kubenswrapper[4996]: I0228 09:35:04.036258 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:35:04 crc kubenswrapper[4996]: I0228 09:35:04.037069 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:35:04 crc kubenswrapper[4996]: I0228 09:35:04.106339 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:35:04 crc kubenswrapper[4996]: I0228 09:35:04.565578 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:35:04 crc kubenswrapper[4996]: I0228 09:35:04.631963 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgst4"] Feb 28 09:35:06 crc kubenswrapper[4996]: I0228 09:35:06.518548 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jgst4" podUID="b194f38f-0140-4999-8fa4-78603ca43f35" containerName="registry-server" containerID="cri-o://1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57" gracePeriod=2 Feb 28 09:35:06 crc kubenswrapper[4996]: I0228 09:35:06.965045 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.097498 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-utilities\") pod \"b194f38f-0140-4999-8fa4-78603ca43f35\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.098156 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2xc6\" (UniqueName: \"kubernetes.io/projected/b194f38f-0140-4999-8fa4-78603ca43f35-kube-api-access-s2xc6\") pod \"b194f38f-0140-4999-8fa4-78603ca43f35\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.098200 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-catalog-content\") pod 
\"b194f38f-0140-4999-8fa4-78603ca43f35\" (UID: \"b194f38f-0140-4999-8fa4-78603ca43f35\") " Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.099471 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-utilities" (OuterVolumeSpecName: "utilities") pod "b194f38f-0140-4999-8fa4-78603ca43f35" (UID: "b194f38f-0140-4999-8fa4-78603ca43f35"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.104462 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b194f38f-0140-4999-8fa4-78603ca43f35-kube-api-access-s2xc6" (OuterVolumeSpecName: "kube-api-access-s2xc6") pod "b194f38f-0140-4999-8fa4-78603ca43f35" (UID: "b194f38f-0140-4999-8fa4-78603ca43f35"). InnerVolumeSpecName "kube-api-access-s2xc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.149742 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b194f38f-0140-4999-8fa4-78603ca43f35" (UID: "b194f38f-0140-4999-8fa4-78603ca43f35"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.202046 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2xc6\" (UniqueName: \"kubernetes.io/projected/b194f38f-0140-4999-8fa4-78603ca43f35-kube-api-access-s2xc6\") on node \"crc\" DevicePath \"\"" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.202085 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.202098 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b194f38f-0140-4999-8fa4-78603ca43f35-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.548813 4996 generic.go:334] "Generic (PLEG): container finished" podID="b194f38f-0140-4999-8fa4-78603ca43f35" containerID="1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57" exitCode=0 Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.548855 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgst4" event={"ID":"b194f38f-0140-4999-8fa4-78603ca43f35","Type":"ContainerDied","Data":"1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57"} Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.548879 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgst4" event={"ID":"b194f38f-0140-4999-8fa4-78603ca43f35","Type":"ContainerDied","Data":"b2071eb27a10bcd0fb3259375a5c45a95e0fedc41a2cf785d389e593a7b9ced1"} Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.548893 4996 scope.go:117] "RemoveContainer" containerID="1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 
09:35:07.548858 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgst4" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.580520 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgst4"] Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.586478 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jgst4"] Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.589625 4996 scope.go:117] "RemoveContainer" containerID="cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.605324 4996 scope.go:117] "RemoveContainer" containerID="efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.644922 4996 scope.go:117] "RemoveContainer" containerID="1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57" Feb 28 09:35:07 crc kubenswrapper[4996]: E0228 09:35:07.645638 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57\": container with ID starting with 1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57 not found: ID does not exist" containerID="1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.645675 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57"} err="failed to get container status \"1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57\": rpc error: code = NotFound desc = could not find container \"1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57\": container with ID starting with 
1da0d313f2f3fdc1be09d3b21dbfe4ac6d08d97788875e572b45ab9f3f5bfd57 not found: ID does not exist" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.645700 4996 scope.go:117] "RemoveContainer" containerID="cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf" Feb 28 09:35:07 crc kubenswrapper[4996]: E0228 09:35:07.645983 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf\": container with ID starting with cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf not found: ID does not exist" containerID="cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.646029 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf"} err="failed to get container status \"cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf\": rpc error: code = NotFound desc = could not find container \"cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf\": container with ID starting with cf5985a1ffed3db6cc69156e97d73cdb5f87c80de3e044cdc09d8cc095f66daf not found: ID does not exist" Feb 28 09:35:07 crc kubenswrapper[4996]: I0228 09:35:07.646042 4996 scope.go:117] "RemoveContainer" containerID="efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2" Feb 28 09:35:07 crc kubenswrapper[4996]: E0228 09:35:07.646406 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2\": container with ID starting with efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2 not found: ID does not exist" containerID="efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2" Feb 28 09:35:07 crc 
kubenswrapper[4996]: I0228 09:35:07.646453 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2"} err="failed to get container status \"efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2\": rpc error: code = NotFound desc = could not find container \"efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2\": container with ID starting with efc663fcf69559409e08d5d63d7e2f406ba050c6ec3d6e2120638dab393382f2 not found: ID does not exist" Feb 28 09:35:09 crc kubenswrapper[4996]: I0228 09:35:09.047717 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b194f38f-0140-4999-8fa4-78603ca43f35" path="/var/lib/kubelet/pods/b194f38f-0140-4999-8fa4-78603ca43f35/volumes" Feb 28 09:35:12 crc kubenswrapper[4996]: I0228 09:35:12.251054 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:35:12 crc kubenswrapper[4996]: I0228 09:35:12.251401 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:35:24 crc kubenswrapper[4996]: E0228 09:35:24.507435 4996 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.9:59816->38.102.83.9:39449: write tcp 38.102.83.9:59816->38.102.83.9:39449: write: broken pipe Feb 28 09:35:42 crc kubenswrapper[4996]: I0228 09:35:42.249508 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:35:42 crc kubenswrapper[4996]: I0228 09:35:42.250281 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:35:42 crc kubenswrapper[4996]: I0228 09:35:42.250357 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:35:42 crc kubenswrapper[4996]: I0228 09:35:42.251479 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3f7c29ff6876349fc77935eade8f1cab613d1ff9df0a3d784466d07cdf7529f"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:35:42 crc kubenswrapper[4996]: I0228 09:35:42.251588 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://d3f7c29ff6876349fc77935eade8f1cab613d1ff9df0a3d784466d07cdf7529f" gracePeriod=600 Feb 28 09:35:42 crc kubenswrapper[4996]: I0228 09:35:42.909794 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="d3f7c29ff6876349fc77935eade8f1cab613d1ff9df0a3d784466d07cdf7529f" exitCode=0 Feb 28 09:35:42 crc kubenswrapper[4996]: I0228 09:35:42.909939 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"d3f7c29ff6876349fc77935eade8f1cab613d1ff9df0a3d784466d07cdf7529f"} Feb 28 09:35:42 crc kubenswrapper[4996]: I0228 09:35:42.910799 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee"} Feb 28 09:35:42 crc kubenswrapper[4996]: I0228 09:35:42.910839 4996 scope.go:117] "RemoveContainer" containerID="091a990f29b5525a0dfe5cc023c6429f0a3f1bf301400b8fd98b4abc861ee18e" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.139580 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537856-qt2w5"] Feb 28 09:36:00 crc kubenswrapper[4996]: E0228 09:36:00.140546 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b194f38f-0140-4999-8fa4-78603ca43f35" containerName="extract-content" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.140563 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b194f38f-0140-4999-8fa4-78603ca43f35" containerName="extract-content" Feb 28 09:36:00 crc kubenswrapper[4996]: E0228 09:36:00.140605 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b194f38f-0140-4999-8fa4-78603ca43f35" containerName="extract-utilities" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.140614 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b194f38f-0140-4999-8fa4-78603ca43f35" containerName="extract-utilities" Feb 28 09:36:00 crc kubenswrapper[4996]: E0228 09:36:00.140639 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b194f38f-0140-4999-8fa4-78603ca43f35" containerName="registry-server" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.140646 4996 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b194f38f-0140-4999-8fa4-78603ca43f35" containerName="registry-server" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.140959 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b194f38f-0140-4999-8fa4-78603ca43f35" containerName="registry-server" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.141964 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537856-qt2w5" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.144661 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.144664 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.145521 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.150426 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537856-qt2w5"] Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.320066 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrl5v\" (UniqueName: \"kubernetes.io/projected/4b5952ed-8b64-4873-a4e3-688e1a36be1f-kube-api-access-wrl5v\") pod \"auto-csr-approver-29537856-qt2w5\" (UID: \"4b5952ed-8b64-4873-a4e3-688e1a36be1f\") " pod="openshift-infra/auto-csr-approver-29537856-qt2w5" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.422095 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrl5v\" (UniqueName: \"kubernetes.io/projected/4b5952ed-8b64-4873-a4e3-688e1a36be1f-kube-api-access-wrl5v\") pod \"auto-csr-approver-29537856-qt2w5\" (UID: 
\"4b5952ed-8b64-4873-a4e3-688e1a36be1f\") " pod="openshift-infra/auto-csr-approver-29537856-qt2w5" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.445125 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrl5v\" (UniqueName: \"kubernetes.io/projected/4b5952ed-8b64-4873-a4e3-688e1a36be1f-kube-api-access-wrl5v\") pod \"auto-csr-approver-29537856-qt2w5\" (UID: \"4b5952ed-8b64-4873-a4e3-688e1a36be1f\") " pod="openshift-infra/auto-csr-approver-29537856-qt2w5" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.470555 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537856-qt2w5" Feb 28 09:36:00 crc kubenswrapper[4996]: I0228 09:36:00.940689 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537856-qt2w5"] Feb 28 09:36:01 crc kubenswrapper[4996]: I0228 09:36:01.102852 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537856-qt2w5" event={"ID":"4b5952ed-8b64-4873-a4e3-688e1a36be1f","Type":"ContainerStarted","Data":"87f806987026b3ef98f196cd37f1d7728a64dca3c3d17baf2d9a1683362fd740"} Feb 28 09:36:02 crc kubenswrapper[4996]: I0228 09:36:02.119363 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537856-qt2w5" event={"ID":"4b5952ed-8b64-4873-a4e3-688e1a36be1f","Type":"ContainerStarted","Data":"0c643a82282948ae29b5455d135f16150feebdc6383f0c418d8718c6a62782c1"} Feb 28 09:36:02 crc kubenswrapper[4996]: I0228 09:36:02.152433 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537856-qt2w5" podStartSLOduration=1.389432007 podStartE2EDuration="2.152400901s" podCreationTimestamp="2026-02-28 09:36:00 +0000 UTC" firstStartedPulling="2026-02-28 09:36:00.961720736 +0000 UTC m=+2124.652523547" lastFinishedPulling="2026-02-28 09:36:01.72468963 +0000 UTC m=+2125.415492441" 
observedRunningTime="2026-02-28 09:36:02.144284623 +0000 UTC m=+2125.835087504" watchObservedRunningTime="2026-02-28 09:36:02.152400901 +0000 UTC m=+2125.843203782" Feb 28 09:36:03 crc kubenswrapper[4996]: I0228 09:36:03.129748 4996 generic.go:334] "Generic (PLEG): container finished" podID="4b5952ed-8b64-4873-a4e3-688e1a36be1f" containerID="0c643a82282948ae29b5455d135f16150feebdc6383f0c418d8718c6a62782c1" exitCode=0 Feb 28 09:36:03 crc kubenswrapper[4996]: I0228 09:36:03.129865 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537856-qt2w5" event={"ID":"4b5952ed-8b64-4873-a4e3-688e1a36be1f","Type":"ContainerDied","Data":"0c643a82282948ae29b5455d135f16150feebdc6383f0c418d8718c6a62782c1"} Feb 28 09:36:04 crc kubenswrapper[4996]: I0228 09:36:04.489983 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537856-qt2w5" Feb 28 09:36:04 crc kubenswrapper[4996]: I0228 09:36:04.498734 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrl5v\" (UniqueName: \"kubernetes.io/projected/4b5952ed-8b64-4873-a4e3-688e1a36be1f-kube-api-access-wrl5v\") pod \"4b5952ed-8b64-4873-a4e3-688e1a36be1f\" (UID: \"4b5952ed-8b64-4873-a4e3-688e1a36be1f\") " Feb 28 09:36:04 crc kubenswrapper[4996]: I0228 09:36:04.514240 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5952ed-8b64-4873-a4e3-688e1a36be1f-kube-api-access-wrl5v" (OuterVolumeSpecName: "kube-api-access-wrl5v") pod "4b5952ed-8b64-4873-a4e3-688e1a36be1f" (UID: "4b5952ed-8b64-4873-a4e3-688e1a36be1f"). InnerVolumeSpecName "kube-api-access-wrl5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:36:04 crc kubenswrapper[4996]: I0228 09:36:04.600309 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrl5v\" (UniqueName: \"kubernetes.io/projected/4b5952ed-8b64-4873-a4e3-688e1a36be1f-kube-api-access-wrl5v\") on node \"crc\" DevicePath \"\"" Feb 28 09:36:05 crc kubenswrapper[4996]: I0228 09:36:05.148129 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537856-qt2w5" event={"ID":"4b5952ed-8b64-4873-a4e3-688e1a36be1f","Type":"ContainerDied","Data":"87f806987026b3ef98f196cd37f1d7728a64dca3c3d17baf2d9a1683362fd740"} Feb 28 09:36:05 crc kubenswrapper[4996]: I0228 09:36:05.148661 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f806987026b3ef98f196cd37f1d7728a64dca3c3d17baf2d9a1683362fd740" Feb 28 09:36:05 crc kubenswrapper[4996]: I0228 09:36:05.148181 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537856-qt2w5" Feb 28 09:36:05 crc kubenswrapper[4996]: I0228 09:36:05.214750 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537850-rllwb"] Feb 28 09:36:05 crc kubenswrapper[4996]: I0228 09:36:05.222046 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537850-rllwb"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.057312 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5026f9bd-e6a2-4b42-b7cb-eefed7e5a187" path="/var/lib/kubelet/pods/5026f9bd-e6a2-4b42-b7cb-eefed7e5a187/volumes" Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.302652 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.315617 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.325999 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g2gd4"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.334409 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.342889 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-f9czp"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.350171 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-87fzt"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.356565 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-g2gd4"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.362414 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rh9r6"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.367859 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.373195 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-2kc8w"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.378675 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.384681 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.391273 
4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-5cp2j"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.397635 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-znw96"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.403163 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.409113 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-fkdjf"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.414454 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.419941 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.425642 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-sklq6"] Feb 28 09:36:07 crc kubenswrapper[4996]: I0228 09:36:07.430725 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-v2d7s"] Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 09:36:09.062278 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17f535e2-b6a2-40de-bf63-160b7aeb3b70" path="/var/lib/kubelet/pods/17f535e2-b6a2-40de-bf63-160b7aeb3b70/volumes" Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 09:36:09.063670 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="187619a6-6bc6-4fce-a88e-f13ccf565e4d" path="/var/lib/kubelet/pods/187619a6-6bc6-4fce-a88e-f13ccf565e4d/volumes" Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 
09:36:09.064746 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392b3689-4b9e-4ce6-a2a0-55fbdc67335d" path="/var/lib/kubelet/pods/392b3689-4b9e-4ce6-a2a0-55fbdc67335d/volumes" Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 09:36:09.065913 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4894e3d4-6f41-4761-974a-7a150702e852" path="/var/lib/kubelet/pods/4894e3d4-6f41-4761-974a-7a150702e852/volumes" Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 09:36:09.068781 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f511eb-b1e0-4c7a-a26d-49fad3305cee" path="/var/lib/kubelet/pods/49f511eb-b1e0-4c7a-a26d-49fad3305cee/volumes" Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 09:36:09.069868 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55c25f7f-b29b-4504-85d7-e2be62f5ed22" path="/var/lib/kubelet/pods/55c25f7f-b29b-4504-85d7-e2be62f5ed22/volumes" Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 09:36:09.071043 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a18a047-3d57-467d-a116-6ccd83c7b54a" path="/var/lib/kubelet/pods/5a18a047-3d57-467d-a116-6ccd83c7b54a/volumes" Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 09:36:09.073261 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a28e9ec-760a-4b1a-93ec-bdca318ebe00" path="/var/lib/kubelet/pods/6a28e9ec-760a-4b1a-93ec-bdca318ebe00/volumes" Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 09:36:09.074314 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738a9376-79b4-4611-b57c-baf13a1899fd" path="/var/lib/kubelet/pods/738a9376-79b4-4611-b57c-baf13a1899fd/volumes" Feb 28 09:36:09 crc kubenswrapper[4996]: I0228 09:36:09.076241 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e9c1df-d012-45b5-8315-0c8c14d680d2" path="/var/lib/kubelet/pods/f7e9c1df-d012-45b5-8315-0c8c14d680d2/volumes" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 
09:36:13.376072 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf"] Feb 28 09:36:13 crc kubenswrapper[4996]: E0228 09:36:13.376769 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5952ed-8b64-4873-a4e3-688e1a36be1f" containerName="oc" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.376780 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5952ed-8b64-4873-a4e3-688e1a36be1f" containerName="oc" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.376935 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5952ed-8b64-4873-a4e3-688e1a36be1f" containerName="oc" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.377485 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.381064 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.381093 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.385092 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.385529 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.386359 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.390149 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf"] Feb 28 09:36:13 crc 
kubenswrapper[4996]: I0228 09:36:13.466964 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.467061 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdqd\" (UniqueName: \"kubernetes.io/projected/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-kube-api-access-drdqd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.467150 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.467198 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.467248 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.569199 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.569268 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdqd\" (UniqueName: \"kubernetes.io/projected/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-kube-api-access-drdqd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.569335 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.569374 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: 
\"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.569418 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.577462 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.578289 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.578862 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.579710 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.600993 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdqd\" (UniqueName: \"kubernetes.io/projected/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-kube-api-access-drdqd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:13 crc kubenswrapper[4996]: I0228 09:36:13.692397 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:14 crc kubenswrapper[4996]: I0228 09:36:14.229471 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf"] Feb 28 09:36:15 crc kubenswrapper[4996]: I0228 09:36:15.244545 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" event={"ID":"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6","Type":"ContainerStarted","Data":"ea88082bc537beff79e3d2ac36dc9bf3fd5467beb6182d865fce97b5cb3c9557"} Feb 28 09:36:15 crc kubenswrapper[4996]: I0228 09:36:15.244934 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" event={"ID":"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6","Type":"ContainerStarted","Data":"5ee82845f27942572e878c65765461d52fc86ea8cfad070c5b5af0e2912d6404"} Feb 28 09:36:15 crc kubenswrapper[4996]: I0228 09:36:15.276804 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" 
podStartSLOduration=1.876042279 podStartE2EDuration="2.276778181s" podCreationTimestamp="2026-02-28 09:36:13 +0000 UTC" firstStartedPulling="2026-02-28 09:36:14.238422629 +0000 UTC m=+2137.929225440" lastFinishedPulling="2026-02-28 09:36:14.639158531 +0000 UTC m=+2138.329961342" observedRunningTime="2026-02-28 09:36:15.270261722 +0000 UTC m=+2138.961064603" watchObservedRunningTime="2026-02-28 09:36:15.276778181 +0000 UTC m=+2138.967581032" Feb 28 09:36:26 crc kubenswrapper[4996]: I0228 09:36:26.351661 4996 generic.go:334] "Generic (PLEG): container finished" podID="6dbca8bf-95da-4cd5-b57e-d810e5f39ae6" containerID="ea88082bc537beff79e3d2ac36dc9bf3fd5467beb6182d865fce97b5cb3c9557" exitCode=0 Feb 28 09:36:26 crc kubenswrapper[4996]: I0228 09:36:26.351740 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" event={"ID":"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6","Type":"ContainerDied","Data":"ea88082bc537beff79e3d2ac36dc9bf3fd5467beb6182d865fce97b5cb3c9557"} Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.808863 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.950122 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-inventory\") pod \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.950289 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drdqd\" (UniqueName: \"kubernetes.io/projected/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-kube-api-access-drdqd\") pod \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.950319 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-repo-setup-combined-ca-bundle\") pod \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.950365 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ceph\") pod \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.950446 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ssh-key-openstack-edpm-ipam\") pod \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\" (UID: \"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6\") " Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.955302 4996 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6" (UID: "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.957556 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ceph" (OuterVolumeSpecName: "ceph") pod "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6" (UID: "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.957634 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-kube-api-access-drdqd" (OuterVolumeSpecName: "kube-api-access-drdqd") pod "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6" (UID: "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6"). InnerVolumeSpecName "kube-api-access-drdqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.977640 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-inventory" (OuterVolumeSpecName: "inventory") pod "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6" (UID: "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:36:27 crc kubenswrapper[4996]: I0228 09:36:27.982654 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6" (UID: "6dbca8bf-95da-4cd5-b57e-d810e5f39ae6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.053381 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.053449 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drdqd\" (UniqueName: \"kubernetes.io/projected/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-kube-api-access-drdqd\") on node \"crc\" DevicePath \"\"" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.053474 4996 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.053496 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.053516 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6dbca8bf-95da-4cd5-b57e-d810e5f39ae6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.369903 4996 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" event={"ID":"6dbca8bf-95da-4cd5-b57e-d810e5f39ae6","Type":"ContainerDied","Data":"5ee82845f27942572e878c65765461d52fc86ea8cfad070c5b5af0e2912d6404"} Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.369952 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ee82845f27942572e878c65765461d52fc86ea8cfad070c5b5af0e2912d6404" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.369988 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.470654 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75"] Feb 28 09:36:28 crc kubenswrapper[4996]: E0228 09:36:28.471053 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbca8bf-95da-4cd5-b57e-d810e5f39ae6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.471066 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbca8bf-95da-4cd5-b57e-d810e5f39ae6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.471212 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbca8bf-95da-4cd5-b57e-d810e5f39ae6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.471750 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.474462 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.474470 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.474640 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.476287 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.478713 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.539323 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75"] Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.665371 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjdw8\" (UniqueName: \"kubernetes.io/projected/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-kube-api-access-xjdw8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.665440 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: 
\"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.665494 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.665637 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.665679 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.766990 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.767439 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjdw8\" (UniqueName: \"kubernetes.io/projected/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-kube-api-access-xjdw8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.767546 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.768077 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.768205 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.772252 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: 
\"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.772265 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.772754 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.772859 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.784204 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjdw8\" (UniqueName: \"kubernetes.io/projected/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-kube-api-access-xjdw8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:28 crc kubenswrapper[4996]: I0228 09:36:28.836620 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:36:29 crc kubenswrapper[4996]: I0228 09:36:29.396569 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75"] Feb 28 09:36:29 crc kubenswrapper[4996]: I0228 09:36:29.408166 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:36:30 crc kubenswrapper[4996]: I0228 09:36:30.388793 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" event={"ID":"d0a3e2dd-04e0-4625-b69c-6fddf875deeb","Type":"ContainerStarted","Data":"975bf171fbdd714c7a0c8aaeebc6748b7c5a1c81fbb04940fd72b4062d178524"} Feb 28 09:36:30 crc kubenswrapper[4996]: I0228 09:36:30.389131 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" event={"ID":"d0a3e2dd-04e0-4625-b69c-6fddf875deeb","Type":"ContainerStarted","Data":"0946d7953ba46b99c8110d4afd70c21dc1fbea8a4a63941ddd36b16a9f544537"} Feb 28 09:36:30 crc kubenswrapper[4996]: I0228 09:36:30.423302 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" podStartSLOduration=2.006974831 podStartE2EDuration="2.423277113s" podCreationTimestamp="2026-02-28 09:36:28 +0000 UTC" firstStartedPulling="2026-02-28 09:36:29.40781321 +0000 UTC m=+2153.098616021" lastFinishedPulling="2026-02-28 09:36:29.824115482 +0000 UTC m=+2153.514918303" observedRunningTime="2026-02-28 09:36:30.40595808 +0000 UTC m=+2154.096760891" watchObservedRunningTime="2026-02-28 09:36:30.423277113 +0000 UTC m=+2154.114079964" Feb 28 09:36:43 crc kubenswrapper[4996]: I0228 09:36:43.836686 4996 scope.go:117] "RemoveContainer" containerID="94b0046a3617c6f9be1f3afc5a5feb6ea7d694ca88eea2260a4c212d9d7f5224" Feb 28 09:36:43 crc kubenswrapper[4996]: I0228 
09:36:43.883687 4996 scope.go:117] "RemoveContainer" containerID="6daaa3ca882eab2bcc77f937ed99605ee5a323616f5dd925c69757ad2b120341" Feb 28 09:36:43 crc kubenswrapper[4996]: I0228 09:36:43.981384 4996 scope.go:117] "RemoveContainer" containerID="58624d7af507025806c26c50d319710894c7ecb76d93f2a00be40d90c0e051be" Feb 28 09:36:44 crc kubenswrapper[4996]: I0228 09:36:44.036937 4996 scope.go:117] "RemoveContainer" containerID="014891e052fad6da40ac1b8051d1ca1459aedeff2b06bccc33d2f037e895be5d" Feb 28 09:36:44 crc kubenswrapper[4996]: I0228 09:36:44.092849 4996 scope.go:117] "RemoveContainer" containerID="a19ee68511e5256776041fab6f42469d148fea0460b2048dc9920b795a9a45d5" Feb 28 09:36:44 crc kubenswrapper[4996]: I0228 09:36:44.155973 4996 scope.go:117] "RemoveContainer" containerID="d3fe8171d55bab7cffdc92f1b7f5fb879812ea41dd09703e106477c649f33c70" Feb 28 09:36:44 crc kubenswrapper[4996]: I0228 09:36:44.205101 4996 scope.go:117] "RemoveContainer" containerID="158379b8007b9195d272c65da092c766ed4296f7fc08dae685731b1c77d4ea5c" Feb 28 09:36:44 crc kubenswrapper[4996]: I0228 09:36:44.256680 4996 scope.go:117] "RemoveContainer" containerID="05619ef064203ce8ae9d29be8ef07bc7f52c07f17927f3e11b01f490217c1834" Feb 28 09:37:42 crc kubenswrapper[4996]: I0228 09:37:42.248550 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:37:42 crc kubenswrapper[4996]: I0228 09:37:42.249232 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:37:44 crc kubenswrapper[4996]: I0228 
09:37:44.530873 4996 scope.go:117] "RemoveContainer" containerID="5859032b9970d101704611da5d31cb80a4aa6bf342810fbb8db26762e8d83386" Feb 28 09:37:44 crc kubenswrapper[4996]: I0228 09:37:44.571089 4996 scope.go:117] "RemoveContainer" containerID="c58a800ec77d489e2a360ad16bd16a8e07c187ce0d161d13dd5122ace80012c5" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.330896 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8gpjn"] Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.335819 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.350834 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gpjn"] Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.526078 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-utilities\") pod \"community-operators-8gpjn\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.526172 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-catalog-content\") pod \"community-operators-8gpjn\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.526255 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgbs7\" (UniqueName: \"kubernetes.io/projected/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-kube-api-access-tgbs7\") pod \"community-operators-8gpjn\" (UID: 
\"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.628164 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-catalog-content\") pod \"community-operators-8gpjn\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.628252 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgbs7\" (UniqueName: \"kubernetes.io/projected/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-kube-api-access-tgbs7\") pod \"community-operators-8gpjn\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.628359 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-utilities\") pod \"community-operators-8gpjn\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.628974 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-utilities\") pod \"community-operators-8gpjn\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.628991 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-catalog-content\") pod \"community-operators-8gpjn\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") 
" pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.648096 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgbs7\" (UniqueName: \"kubernetes.io/projected/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-kube-api-access-tgbs7\") pod \"community-operators-8gpjn\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:55 crc kubenswrapper[4996]: I0228 09:37:55.656543 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:37:56 crc kubenswrapper[4996]: I0228 09:37:56.183771 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gpjn"] Feb 28 09:37:56 crc kubenswrapper[4996]: I0228 09:37:56.255022 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gpjn" event={"ID":"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe","Type":"ContainerStarted","Data":"07dd6a1a693dc8131f7e96e9b30c38bbfa5f8a2126ddab32803590a566f6c646"} Feb 28 09:37:57 crc kubenswrapper[4996]: I0228 09:37:57.265131 4996 generic.go:334] "Generic (PLEG): container finished" podID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerID="715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5" exitCode=0 Feb 28 09:37:57 crc kubenswrapper[4996]: I0228 09:37:57.265201 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gpjn" event={"ID":"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe","Type":"ContainerDied","Data":"715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5"} Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.143918 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-94j57"] Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.146636 4996 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.170657 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94j57"] Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.181294 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnm5q\" (UniqueName: \"kubernetes.io/projected/8d11aa77-e726-43e4-bb35-66a3f93adc85-kube-api-access-vnm5q\") pod \"redhat-marketplace-94j57\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.181395 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-catalog-content\") pod \"redhat-marketplace-94j57\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.181450 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-utilities\") pod \"redhat-marketplace-94j57\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.275823 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gpjn" event={"ID":"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe","Type":"ContainerStarted","Data":"d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5"} Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.282573 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnm5q\" 
(UniqueName: \"kubernetes.io/projected/8d11aa77-e726-43e4-bb35-66a3f93adc85-kube-api-access-vnm5q\") pod \"redhat-marketplace-94j57\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.282673 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-catalog-content\") pod \"redhat-marketplace-94j57\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.282732 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-utilities\") pod \"redhat-marketplace-94j57\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.283111 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-catalog-content\") pod \"redhat-marketplace-94j57\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.283253 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-utilities\") pod \"redhat-marketplace-94j57\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.305864 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnm5q\" (UniqueName: 
\"kubernetes.io/projected/8d11aa77-e726-43e4-bb35-66a3f93adc85-kube-api-access-vnm5q\") pod \"redhat-marketplace-94j57\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.468318 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:37:58 crc kubenswrapper[4996]: I0228 09:37:58.938635 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-94j57"] Feb 28 09:37:59 crc kubenswrapper[4996]: I0228 09:37:59.287511 4996 generic.go:334] "Generic (PLEG): container finished" podID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerID="2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0" exitCode=0 Feb 28 09:37:59 crc kubenswrapper[4996]: I0228 09:37:59.287560 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94j57" event={"ID":"8d11aa77-e726-43e4-bb35-66a3f93adc85","Type":"ContainerDied","Data":"2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0"} Feb 28 09:37:59 crc kubenswrapper[4996]: I0228 09:37:59.287921 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94j57" event={"ID":"8d11aa77-e726-43e4-bb35-66a3f93adc85","Type":"ContainerStarted","Data":"f53b457bec009d5e954498ba5237ce1f186809291f9c2ca45c369fb5e9e7f768"} Feb 28 09:37:59 crc kubenswrapper[4996]: I0228 09:37:59.292383 4996 generic.go:334] "Generic (PLEG): container finished" podID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerID="d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5" exitCode=0 Feb 28 09:37:59 crc kubenswrapper[4996]: I0228 09:37:59.292431 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gpjn" 
event={"ID":"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe","Type":"ContainerDied","Data":"d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5"} Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.155475 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537858-wf7m7"] Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.156966 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537858-wf7m7" Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.159356 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.159788 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.160428 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.188084 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537858-wf7m7"] Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.227043 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26h4h\" (UniqueName: \"kubernetes.io/projected/dd6fd48a-ad72-48f5-8bb1-06804de608c2-kube-api-access-26h4h\") pod \"auto-csr-approver-29537858-wf7m7\" (UID: \"dd6fd48a-ad72-48f5-8bb1-06804de608c2\") " pod="openshift-infra/auto-csr-approver-29537858-wf7m7" Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.305031 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gpjn" event={"ID":"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe","Type":"ContainerStarted","Data":"10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc"} Feb 28 09:38:00 crc 
kubenswrapper[4996]: I0228 09:38:00.309289 4996 generic.go:334] "Generic (PLEG): container finished" podID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerID="2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004" exitCode=0 Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.309335 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94j57" event={"ID":"8d11aa77-e726-43e4-bb35-66a3f93adc85","Type":"ContainerDied","Data":"2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004"} Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.327947 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26h4h\" (UniqueName: \"kubernetes.io/projected/dd6fd48a-ad72-48f5-8bb1-06804de608c2-kube-api-access-26h4h\") pod \"auto-csr-approver-29537858-wf7m7\" (UID: \"dd6fd48a-ad72-48f5-8bb1-06804de608c2\") " pod="openshift-infra/auto-csr-approver-29537858-wf7m7" Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.329088 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8gpjn" podStartSLOduration=2.942090775 podStartE2EDuration="5.329068531s" podCreationTimestamp="2026-02-28 09:37:55 +0000 UTC" firstStartedPulling="2026-02-28 09:37:57.26746065 +0000 UTC m=+2240.958263461" lastFinishedPulling="2026-02-28 09:37:59.654438386 +0000 UTC m=+2243.345241217" observedRunningTime="2026-02-28 09:38:00.323312041 +0000 UTC m=+2244.014114872" watchObservedRunningTime="2026-02-28 09:38:00.329068531 +0000 UTC m=+2244.019871352" Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.350213 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26h4h\" (UniqueName: \"kubernetes.io/projected/dd6fd48a-ad72-48f5-8bb1-06804de608c2-kube-api-access-26h4h\") pod \"auto-csr-approver-29537858-wf7m7\" (UID: \"dd6fd48a-ad72-48f5-8bb1-06804de608c2\") " 
pod="openshift-infra/auto-csr-approver-29537858-wf7m7" Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.477401 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537858-wf7m7" Feb 28 09:38:00 crc kubenswrapper[4996]: I0228 09:38:00.953928 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537858-wf7m7"] Feb 28 09:38:01 crc kubenswrapper[4996]: I0228 09:38:01.327853 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94j57" event={"ID":"8d11aa77-e726-43e4-bb35-66a3f93adc85","Type":"ContainerStarted","Data":"e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335"} Feb 28 09:38:01 crc kubenswrapper[4996]: I0228 09:38:01.329114 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537858-wf7m7" event={"ID":"dd6fd48a-ad72-48f5-8bb1-06804de608c2","Type":"ContainerStarted","Data":"182582231e4823235dc61047cd0cf6917bac21630cf8e545704ff3028de09d40"} Feb 28 09:38:02 crc kubenswrapper[4996]: I0228 09:38:02.337788 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537858-wf7m7" event={"ID":"dd6fd48a-ad72-48f5-8bb1-06804de608c2","Type":"ContainerStarted","Data":"9c63488b0eba73748570b05be2ce120ee56ef56fe9ed55859d8bcd9fa4925677"} Feb 28 09:38:02 crc kubenswrapper[4996]: I0228 09:38:02.355702 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537858-wf7m7" podStartSLOduration=1.395471619 podStartE2EDuration="2.355684563s" podCreationTimestamp="2026-02-28 09:38:00 +0000 UTC" firstStartedPulling="2026-02-28 09:38:00.956236106 +0000 UTC m=+2244.647038917" lastFinishedPulling="2026-02-28 09:38:01.91644904 +0000 UTC m=+2245.607251861" observedRunningTime="2026-02-28 09:38:02.351182612 +0000 UTC m=+2246.041985443" watchObservedRunningTime="2026-02-28 
09:38:02.355684563 +0000 UTC m=+2246.046487364" Feb 28 09:38:02 crc kubenswrapper[4996]: I0228 09:38:02.357139 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-94j57" podStartSLOduration=2.882621046 podStartE2EDuration="4.357131968s" podCreationTimestamp="2026-02-28 09:37:58 +0000 UTC" firstStartedPulling="2026-02-28 09:37:59.289205881 +0000 UTC m=+2242.980008702" lastFinishedPulling="2026-02-28 09:38:00.763716813 +0000 UTC m=+2244.454519624" observedRunningTime="2026-02-28 09:38:01.347282922 +0000 UTC m=+2245.038085723" watchObservedRunningTime="2026-02-28 09:38:02.357131968 +0000 UTC m=+2246.047934779" Feb 28 09:38:03 crc kubenswrapper[4996]: I0228 09:38:03.348889 4996 generic.go:334] "Generic (PLEG): container finished" podID="dd6fd48a-ad72-48f5-8bb1-06804de608c2" containerID="9c63488b0eba73748570b05be2ce120ee56ef56fe9ed55859d8bcd9fa4925677" exitCode=0 Feb 28 09:38:03 crc kubenswrapper[4996]: I0228 09:38:03.348926 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537858-wf7m7" event={"ID":"dd6fd48a-ad72-48f5-8bb1-06804de608c2","Type":"ContainerDied","Data":"9c63488b0eba73748570b05be2ce120ee56ef56fe9ed55859d8bcd9fa4925677"} Feb 28 09:38:04 crc kubenswrapper[4996]: I0228 09:38:04.665070 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537858-wf7m7" Feb 28 09:38:04 crc kubenswrapper[4996]: I0228 09:38:04.806452 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26h4h\" (UniqueName: \"kubernetes.io/projected/dd6fd48a-ad72-48f5-8bb1-06804de608c2-kube-api-access-26h4h\") pod \"dd6fd48a-ad72-48f5-8bb1-06804de608c2\" (UID: \"dd6fd48a-ad72-48f5-8bb1-06804de608c2\") " Feb 28 09:38:04 crc kubenswrapper[4996]: I0228 09:38:04.814265 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd6fd48a-ad72-48f5-8bb1-06804de608c2-kube-api-access-26h4h" (OuterVolumeSpecName: "kube-api-access-26h4h") pod "dd6fd48a-ad72-48f5-8bb1-06804de608c2" (UID: "dd6fd48a-ad72-48f5-8bb1-06804de608c2"). InnerVolumeSpecName "kube-api-access-26h4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:38:04 crc kubenswrapper[4996]: I0228 09:38:04.908015 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26h4h\" (UniqueName: \"kubernetes.io/projected/dd6fd48a-ad72-48f5-8bb1-06804de608c2-kube-api-access-26h4h\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:05 crc kubenswrapper[4996]: I0228 09:38:05.364775 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537858-wf7m7" event={"ID":"dd6fd48a-ad72-48f5-8bb1-06804de608c2","Type":"ContainerDied","Data":"182582231e4823235dc61047cd0cf6917bac21630cf8e545704ff3028de09d40"} Feb 28 09:38:05 crc kubenswrapper[4996]: I0228 09:38:05.364815 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="182582231e4823235dc61047cd0cf6917bac21630cf8e545704ff3028de09d40" Feb 28 09:38:05 crc kubenswrapper[4996]: I0228 09:38:05.364866 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537858-wf7m7" Feb 28 09:38:05 crc kubenswrapper[4996]: I0228 09:38:05.424559 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537852-6fhtm"] Feb 28 09:38:05 crc kubenswrapper[4996]: I0228 09:38:05.432153 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537852-6fhtm"] Feb 28 09:38:05 crc kubenswrapper[4996]: I0228 09:38:05.657626 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:38:05 crc kubenswrapper[4996]: I0228 09:38:05.657675 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:38:05 crc kubenswrapper[4996]: I0228 09:38:05.699620 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:38:06 crc kubenswrapper[4996]: I0228 09:38:06.443075 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:38:06 crc kubenswrapper[4996]: I0228 09:38:06.719060 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gpjn"] Feb 28 09:38:07 crc kubenswrapper[4996]: I0228 09:38:07.049727 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40fe270d-4b8b-416d-96e0-f863fcd0d969" path="/var/lib/kubelet/pods/40fe270d-4b8b-416d-96e0-f863fcd0d969/volumes" Feb 28 09:38:08 crc kubenswrapper[4996]: I0228 09:38:08.391488 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8gpjn" podUID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerName="registry-server" containerID="cri-o://10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc" gracePeriod=2 Feb 28 09:38:08 
crc kubenswrapper[4996]: I0228 09:38:08.469067 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:38:08 crc kubenswrapper[4996]: I0228 09:38:08.469135 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:38:08 crc kubenswrapper[4996]: I0228 09:38:08.541489 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:38:08 crc kubenswrapper[4996]: I0228 09:38:08.889404 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:38:08 crc kubenswrapper[4996]: I0228 09:38:08.992575 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-utilities\") pod \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " Feb 28 09:38:08 crc kubenswrapper[4996]: I0228 09:38:08.992645 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgbs7\" (UniqueName: \"kubernetes.io/projected/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-kube-api-access-tgbs7\") pod \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " Feb 28 09:38:08 crc kubenswrapper[4996]: I0228 09:38:08.992726 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-catalog-content\") pod \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\" (UID: \"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe\") " Feb 28 09:38:08 crc kubenswrapper[4996]: I0228 09:38:08.993590 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-utilities" (OuterVolumeSpecName: "utilities") pod "2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" (UID: "2eac17eb-64d6-4da2-b21f-f96d0a1afbfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.000440 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-kube-api-access-tgbs7" (OuterVolumeSpecName: "kube-api-access-tgbs7") pod "2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" (UID: "2eac17eb-64d6-4da2-b21f-f96d0a1afbfe"). InnerVolumeSpecName "kube-api-access-tgbs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.044894 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" (UID: "2eac17eb-64d6-4da2-b21f-f96d0a1afbfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.095243 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.095312 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.095337 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgbs7\" (UniqueName: \"kubernetes.io/projected/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe-kube-api-access-tgbs7\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.402698 4996 generic.go:334] "Generic (PLEG): container finished" podID="d0a3e2dd-04e0-4625-b69c-6fddf875deeb" containerID="975bf171fbdd714c7a0c8aaeebc6748b7c5a1c81fbb04940fd72b4062d178524" exitCode=0 Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.402757 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" event={"ID":"d0a3e2dd-04e0-4625-b69c-6fddf875deeb","Type":"ContainerDied","Data":"975bf171fbdd714c7a0c8aaeebc6748b7c5a1c81fbb04940fd72b4062d178524"} Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.405447 4996 generic.go:334] "Generic (PLEG): container finished" podID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerID="10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc" exitCode=0 Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.405535 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gpjn" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.405553 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gpjn" event={"ID":"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe","Type":"ContainerDied","Data":"10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc"} Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.405603 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gpjn" event={"ID":"2eac17eb-64d6-4da2-b21f-f96d0a1afbfe","Type":"ContainerDied","Data":"07dd6a1a693dc8131f7e96e9b30c38bbfa5f8a2126ddab32803590a566f6c646"} Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.405639 4996 scope.go:117] "RemoveContainer" containerID="10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.450362 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gpjn"] Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.454187 4996 scope.go:117] "RemoveContainer" containerID="d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.458231 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8gpjn"] Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.478154 4996 scope.go:117] "RemoveContainer" containerID="715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.483684 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.535375 4996 scope.go:117] "RemoveContainer" containerID="10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc" Feb 28 09:38:09 crc 
kubenswrapper[4996]: E0228 09:38:09.536093 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc\": container with ID starting with 10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc not found: ID does not exist" containerID="10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.536133 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc"} err="failed to get container status \"10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc\": rpc error: code = NotFound desc = could not find container \"10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc\": container with ID starting with 10748a9cedb4eee9e20ea284a6313cd1767f7b0b47adbef9037a55c69ea74abc not found: ID does not exist" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.536154 4996 scope.go:117] "RemoveContainer" containerID="d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5" Feb 28 09:38:09 crc kubenswrapper[4996]: E0228 09:38:09.536515 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5\": container with ID starting with d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5 not found: ID does not exist" containerID="d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.536541 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5"} err="failed to get container status 
\"d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5\": rpc error: code = NotFound desc = could not find container \"d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5\": container with ID starting with d009954529339632ed20b581863c16cbde7a978ede7c2d8699add23add96f5a5 not found: ID does not exist" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.536555 4996 scope.go:117] "RemoveContainer" containerID="715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5" Feb 28 09:38:09 crc kubenswrapper[4996]: E0228 09:38:09.536764 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5\": container with ID starting with 715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5 not found: ID does not exist" containerID="715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5" Feb 28 09:38:09 crc kubenswrapper[4996]: I0228 09:38:09.536792 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5"} err="failed to get container status \"715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5\": rpc error: code = NotFound desc = could not find container \"715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5\": container with ID starting with 715b8730897c0c18fbde1ad4745fcaee3b19193f3a93cf527e2ab7cedd4cd6e5 not found: ID does not exist" Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.830423 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.925775 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-bootstrap-combined-ca-bundle\") pod \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.926091 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ssh-key-openstack-edpm-ipam\") pod \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.926311 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ceph\") pod \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.926543 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-inventory\") pod \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.926686 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjdw8\" (UniqueName: \"kubernetes.io/projected/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-kube-api-access-xjdw8\") pod \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\" (UID: \"d0a3e2dd-04e0-4625-b69c-6fddf875deeb\") " Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.931599 4996 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d0a3e2dd-04e0-4625-b69c-6fddf875deeb" (UID: "d0a3e2dd-04e0-4625-b69c-6fddf875deeb"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.931627 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ceph" (OuterVolumeSpecName: "ceph") pod "d0a3e2dd-04e0-4625-b69c-6fddf875deeb" (UID: "d0a3e2dd-04e0-4625-b69c-6fddf875deeb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.937220 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-kube-api-access-xjdw8" (OuterVolumeSpecName: "kube-api-access-xjdw8") pod "d0a3e2dd-04e0-4625-b69c-6fddf875deeb" (UID: "d0a3e2dd-04e0-4625-b69c-6fddf875deeb"). InnerVolumeSpecName "kube-api-access-xjdw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.955210 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d0a3e2dd-04e0-4625-b69c-6fddf875deeb" (UID: "d0a3e2dd-04e0-4625-b69c-6fddf875deeb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:10 crc kubenswrapper[4996]: I0228 09:38:10.959064 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-inventory" (OuterVolumeSpecName: "inventory") pod "d0a3e2dd-04e0-4625-b69c-6fddf875deeb" (UID: "d0a3e2dd-04e0-4625-b69c-6fddf875deeb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.029060 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.029266 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjdw8\" (UniqueName: \"kubernetes.io/projected/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-kube-api-access-xjdw8\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.029365 4996 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.029541 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.029623 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d0a3e2dd-04e0-4625-b69c-6fddf875deeb-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.056679 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" path="/var/lib/kubelet/pods/2eac17eb-64d6-4da2-b21f-f96d0a1afbfe/volumes" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.421940 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" event={"ID":"d0a3e2dd-04e0-4625-b69c-6fddf875deeb","Type":"ContainerDied","Data":"0946d7953ba46b99c8110d4afd70c21dc1fbea8a4a63941ddd36b16a9f544537"} Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.422289 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0946d7953ba46b99c8110d4afd70c21dc1fbea8a4a63941ddd36b16a9f544537" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.422026 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.515134 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl"] Feb 28 09:38:11 crc kubenswrapper[4996]: E0228 09:38:11.515841 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0a3e2dd-04e0-4625-b69c-6fddf875deeb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.515873 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0a3e2dd-04e0-4625-b69c-6fddf875deeb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 09:38:11 crc kubenswrapper[4996]: E0228 09:38:11.515959 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerName="extract-content" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.515977 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerName="extract-content" Feb 28 09:38:11 crc kubenswrapper[4996]: E0228 09:38:11.516040 4996 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerName="registry-server" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.516057 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerName="registry-server" Feb 28 09:38:11 crc kubenswrapper[4996]: E0228 09:38:11.516075 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd6fd48a-ad72-48f5-8bb1-06804de608c2" containerName="oc" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.516094 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd6fd48a-ad72-48f5-8bb1-06804de608c2" containerName="oc" Feb 28 09:38:11 crc kubenswrapper[4996]: E0228 09:38:11.516149 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerName="extract-utilities" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.516166 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerName="extract-utilities" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.516557 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd6fd48a-ad72-48f5-8bb1-06804de608c2" containerName="oc" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.516602 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0a3e2dd-04e0-4625-b69c-6fddf875deeb" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.516652 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="2eac17eb-64d6-4da2-b21f-f96d0a1afbfe" containerName="registry-server" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.517607 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.523104 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.523327 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.523514 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.523681 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.523745 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.524278 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl"] Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.641290 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.641414 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2bhl\" (UniqueName: \"kubernetes.io/projected/a8f28eab-0652-46d4-817f-9b48a6f71e4a-kube-api-access-s2bhl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: 
\"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.641453 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.641479 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.743202 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.743354 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2bhl\" (UniqueName: \"kubernetes.io/projected/a8f28eab-0652-46d4-817f-9b48a6f71e4a-kube-api-access-s2bhl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc 
kubenswrapper[4996]: I0228 09:38:11.743409 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.743486 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.748202 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.752857 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.753057 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.766857 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2bhl\" (UniqueName: \"kubernetes.io/projected/a8f28eab-0652-46d4-817f-9b48a6f71e4a-kube-api-access-s2bhl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-7qknl\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.833555 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.913836 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94j57"] Feb 28 09:38:11 crc kubenswrapper[4996]: I0228 09:38:11.914135 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-94j57" podUID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerName="registry-server" containerID="cri-o://e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335" gracePeriod=2 Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.248976 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.249280 4996 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.350062 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.422277 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl"] Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.432759 4996 generic.go:334] "Generic (PLEG): container finished" podID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerID="e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335" exitCode=0 Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.432801 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94j57" event={"ID":"8d11aa77-e726-43e4-bb35-66a3f93adc85","Type":"ContainerDied","Data":"e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335"} Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.432832 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-94j57" event={"ID":"8d11aa77-e726-43e4-bb35-66a3f93adc85","Type":"ContainerDied","Data":"f53b457bec009d5e954498ba5237ce1f186809291f9c2ca45c369fb5e9e7f768"} Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.432855 4996 scope.go:117] "RemoveContainer" containerID="e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.432984 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-94j57" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.506548 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-catalog-content\") pod \"8d11aa77-e726-43e4-bb35-66a3f93adc85\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.506993 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-utilities\") pod \"8d11aa77-e726-43e4-bb35-66a3f93adc85\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.507159 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnm5q\" (UniqueName: \"kubernetes.io/projected/8d11aa77-e726-43e4-bb35-66a3f93adc85-kube-api-access-vnm5q\") pod \"8d11aa77-e726-43e4-bb35-66a3f93adc85\" (UID: \"8d11aa77-e726-43e4-bb35-66a3f93adc85\") " Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.508567 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-utilities" (OuterVolumeSpecName: "utilities") pod "8d11aa77-e726-43e4-bb35-66a3f93adc85" (UID: "8d11aa77-e726-43e4-bb35-66a3f93adc85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.516012 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d11aa77-e726-43e4-bb35-66a3f93adc85-kube-api-access-vnm5q" (OuterVolumeSpecName: "kube-api-access-vnm5q") pod "8d11aa77-e726-43e4-bb35-66a3f93adc85" (UID: "8d11aa77-e726-43e4-bb35-66a3f93adc85"). InnerVolumeSpecName "kube-api-access-vnm5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.531608 4996 scope.go:117] "RemoveContainer" containerID="2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.543325 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d11aa77-e726-43e4-bb35-66a3f93adc85" (UID: "8d11aa77-e726-43e4-bb35-66a3f93adc85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.559900 4996 scope.go:117] "RemoveContainer" containerID="2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.579747 4996 scope.go:117] "RemoveContainer" containerID="e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335" Feb 28 09:38:12 crc kubenswrapper[4996]: E0228 09:38:12.580225 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335\": container with ID starting with e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335 not found: ID does not exist" containerID="e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.580257 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335"} err="failed to get container status \"e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335\": rpc error: code = NotFound desc = could not find container \"e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335\": container with ID starting 
with e3f7c78bdd9c92f24a2feba9d8157629d87d8fc1d298a07339a07a18f59e6335 not found: ID does not exist" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.580276 4996 scope.go:117] "RemoveContainer" containerID="2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004" Feb 28 09:38:12 crc kubenswrapper[4996]: E0228 09:38:12.580609 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004\": container with ID starting with 2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004 not found: ID does not exist" containerID="2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.580657 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004"} err="failed to get container status \"2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004\": rpc error: code = NotFound desc = could not find container \"2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004\": container with ID starting with 2383770f44f5a1d1df3ae40c3555564f393033427a592d32d7c63ef600dd8004 not found: ID does not exist" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.580678 4996 scope.go:117] "RemoveContainer" containerID="2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0" Feb 28 09:38:12 crc kubenswrapper[4996]: E0228 09:38:12.580938 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0\": container with ID starting with 2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0 not found: ID does not exist" containerID="2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0" Feb 28 09:38:12 
crc kubenswrapper[4996]: I0228 09:38:12.580961 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0"} err="failed to get container status \"2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0\": rpc error: code = NotFound desc = could not find container \"2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0\": container with ID starting with 2bdb520bcb43123494f8d603c98871d47da55bfdd5e0be392cea1bd1afd99ad0 not found: ID does not exist" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.610112 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.610149 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d11aa77-e726-43e4-bb35-66a3f93adc85-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.610162 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnm5q\" (UniqueName: \"kubernetes.io/projected/8d11aa77-e726-43e4-bb35-66a3f93adc85-kube-api-access-vnm5q\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.764661 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-94j57"] Feb 28 09:38:12 crc kubenswrapper[4996]: I0228 09:38:12.771272 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-94j57"] Feb 28 09:38:13 crc kubenswrapper[4996]: I0228 09:38:13.042332 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d11aa77-e726-43e4-bb35-66a3f93adc85" path="/var/lib/kubelet/pods/8d11aa77-e726-43e4-bb35-66a3f93adc85/volumes" Feb 28 09:38:13 
crc kubenswrapper[4996]: I0228 09:38:13.444498 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" event={"ID":"a8f28eab-0652-46d4-817f-9b48a6f71e4a","Type":"ContainerStarted","Data":"5e449013e7eec76cce03827bbc3c68c4cf1748cc4ce3bbdee44b975b71fa5442"} Feb 28 09:38:13 crc kubenswrapper[4996]: I0228 09:38:13.444899 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" event={"ID":"a8f28eab-0652-46d4-817f-9b48a6f71e4a","Type":"ContainerStarted","Data":"840245dcf2475c633dc7ce8e50a17871be36c292e83547c44ea8940030e2486a"} Feb 28 09:38:37 crc kubenswrapper[4996]: I0228 09:38:37.672371 4996 generic.go:334] "Generic (PLEG): container finished" podID="a8f28eab-0652-46d4-817f-9b48a6f71e4a" containerID="5e449013e7eec76cce03827bbc3c68c4cf1748cc4ce3bbdee44b975b71fa5442" exitCode=0 Feb 28 09:38:37 crc kubenswrapper[4996]: I0228 09:38:37.672458 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" event={"ID":"a8f28eab-0652-46d4-817f-9b48a6f71e4a","Type":"ContainerDied","Data":"5e449013e7eec76cce03827bbc3c68c4cf1748cc4ce3bbdee44b975b71fa5442"} Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.118794 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.256250 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-inventory\") pod \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.256323 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ceph\") pod \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.256431 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2bhl\" (UniqueName: \"kubernetes.io/projected/a8f28eab-0652-46d4-817f-9b48a6f71e4a-kube-api-access-s2bhl\") pod \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.256490 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ssh-key-openstack-edpm-ipam\") pod \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\" (UID: \"a8f28eab-0652-46d4-817f-9b48a6f71e4a\") " Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.264246 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ceph" (OuterVolumeSpecName: "ceph") pod "a8f28eab-0652-46d4-817f-9b48a6f71e4a" (UID: "a8f28eab-0652-46d4-817f-9b48a6f71e4a"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.265871 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8f28eab-0652-46d4-817f-9b48a6f71e4a-kube-api-access-s2bhl" (OuterVolumeSpecName: "kube-api-access-s2bhl") pod "a8f28eab-0652-46d4-817f-9b48a6f71e4a" (UID: "a8f28eab-0652-46d4-817f-9b48a6f71e4a"). InnerVolumeSpecName "kube-api-access-s2bhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.290048 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-inventory" (OuterVolumeSpecName: "inventory") pod "a8f28eab-0652-46d4-817f-9b48a6f71e4a" (UID: "a8f28eab-0652-46d4-817f-9b48a6f71e4a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.291263 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a8f28eab-0652-46d4-817f-9b48a6f71e4a" (UID: "a8f28eab-0652-46d4-817f-9b48a6f71e4a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.359088 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.359136 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.359154 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2bhl\" (UniqueName: \"kubernetes.io/projected/a8f28eab-0652-46d4-817f-9b48a6f71e4a-kube-api-access-s2bhl\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.359175 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a8f28eab-0652-46d4-817f-9b48a6f71e4a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.691569 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" event={"ID":"a8f28eab-0652-46d4-817f-9b48a6f71e4a","Type":"ContainerDied","Data":"840245dcf2475c633dc7ce8e50a17871be36c292e83547c44ea8940030e2486a"} Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.691612 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="840245dcf2475c633dc7ce8e50a17871be36c292e83547c44ea8940030e2486a" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.691640 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-7qknl" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.770001 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr"] Feb 28 09:38:39 crc kubenswrapper[4996]: E0228 09:38:39.770739 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8f28eab-0652-46d4-817f-9b48a6f71e4a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.770820 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8f28eab-0652-46d4-817f-9b48a6f71e4a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:38:39 crc kubenswrapper[4996]: E0228 09:38:39.770895 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerName="extract-content" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.770953 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerName="extract-content" Feb 28 09:38:39 crc kubenswrapper[4996]: E0228 09:38:39.771034 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerName="extract-utilities" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.771102 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerName="extract-utilities" Feb 28 09:38:39 crc kubenswrapper[4996]: E0228 09:38:39.771170 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerName="registry-server" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.771227 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerName="registry-server" Feb 28 09:38:39 crc 
kubenswrapper[4996]: I0228 09:38:39.771425 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d11aa77-e726-43e4-bb35-66a3f93adc85" containerName="registry-server" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.771500 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8f28eab-0652-46d4-817f-9b48a6f71e4a" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.772148 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.774752 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.774817 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.774988 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.775397 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.781810 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.786377 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr"] Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.867099 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ceph\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.867235 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvr4v\" (UniqueName: \"kubernetes.io/projected/4b3851de-b4b3-497e-9b3d-d56d55e05792-kube-api-access-pvr4v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.867274 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.867317 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.968923 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.969067 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvr4v\" (UniqueName: \"kubernetes.io/projected/4b3851de-b4b3-497e-9b3d-d56d55e05792-kube-api-access-pvr4v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.969114 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.969335 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.975221 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.975253 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.976083 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:39 crc kubenswrapper[4996]: I0228 09:38:39.995324 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvr4v\" (UniqueName: \"kubernetes.io/projected/4b3851de-b4b3-497e-9b3d-d56d55e05792-kube-api-access-pvr4v\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-6sltr\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:40 crc kubenswrapper[4996]: I0228 09:38:40.097858 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:40 crc kubenswrapper[4996]: I0228 09:38:40.734281 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr"] Feb 28 09:38:41 crc kubenswrapper[4996]: I0228 09:38:41.710326 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" event={"ID":"4b3851de-b4b3-497e-9b3d-d56d55e05792","Type":"ContainerStarted","Data":"98dfb61c99c65b1f25c954b6ecb297300a73da375b6c93efd9826679154e1297"} Feb 28 09:38:41 crc kubenswrapper[4996]: I0228 09:38:41.710663 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" event={"ID":"4b3851de-b4b3-497e-9b3d-d56d55e05792","Type":"ContainerStarted","Data":"b30f70d6f7807ae06f55647d423c7322fb21b8f60cfd97568250ff7d60f89225"} Feb 28 09:38:41 crc kubenswrapper[4996]: I0228 09:38:41.733780 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" podStartSLOduration=2.317973471 podStartE2EDuration="2.733757714s" podCreationTimestamp="2026-02-28 09:38:39 +0000 UTC" firstStartedPulling="2026-02-28 09:38:40.743407087 +0000 UTC m=+2284.434209898" lastFinishedPulling="2026-02-28 09:38:41.15919131 +0000 UTC m=+2284.849994141" observedRunningTime="2026-02-28 09:38:41.727247975 +0000 UTC m=+2285.418050806" watchObservedRunningTime="2026-02-28 09:38:41.733757714 +0000 UTC m=+2285.424560535" Feb 28 09:38:42 crc kubenswrapper[4996]: I0228 09:38:42.248858 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 
09:38:42 crc kubenswrapper[4996]: I0228 09:38:42.251134 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:38:42 crc kubenswrapper[4996]: I0228 09:38:42.251455 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:38:42 crc kubenswrapper[4996]: I0228 09:38:42.252951 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:38:42 crc kubenswrapper[4996]: I0228 09:38:42.253378 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" gracePeriod=600 Feb 28 09:38:42 crc kubenswrapper[4996]: E0228 09:38:42.380165 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:38:42 crc kubenswrapper[4996]: I0228 09:38:42.721529 4996 generic.go:334] 
"Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" exitCode=0 Feb 28 09:38:42 crc kubenswrapper[4996]: I0228 09:38:42.721606 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee"} Feb 28 09:38:42 crc kubenswrapper[4996]: I0228 09:38:42.721716 4996 scope.go:117] "RemoveContainer" containerID="d3f7c29ff6876349fc77935eade8f1cab613d1ff9df0a3d784466d07cdf7529f" Feb 28 09:38:42 crc kubenswrapper[4996]: I0228 09:38:42.722732 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:38:42 crc kubenswrapper[4996]: E0228 09:38:42.723165 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:38:44 crc kubenswrapper[4996]: I0228 09:38:44.701513 4996 scope.go:117] "RemoveContainer" containerID="539b0a91c44ec04986643c1273cee1d416bbb96b8703cc7c4f33072a87a8b2e3" Feb 28 09:38:44 crc kubenswrapper[4996]: I0228 09:38:44.764633 4996 scope.go:117] "RemoveContainer" containerID="b4f67f78313a9ca71b2bb4cda1945a33166965c1445002e8ccafd497e1b73a8d" Feb 28 09:38:46 crc kubenswrapper[4996]: I0228 09:38:46.770055 4996 generic.go:334] "Generic (PLEG): container finished" podID="4b3851de-b4b3-497e-9b3d-d56d55e05792" containerID="98dfb61c99c65b1f25c954b6ecb297300a73da375b6c93efd9826679154e1297" exitCode=0 Feb 28 09:38:46 crc 
kubenswrapper[4996]: I0228 09:38:46.770436 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" event={"ID":"4b3851de-b4b3-497e-9b3d-d56d55e05792","Type":"ContainerDied","Data":"98dfb61c99c65b1f25c954b6ecb297300a73da375b6c93efd9826679154e1297"} Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.221175 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.349211 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ssh-key-openstack-edpm-ipam\") pod \"4b3851de-b4b3-497e-9b3d-d56d55e05792\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.349284 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvr4v\" (UniqueName: \"kubernetes.io/projected/4b3851de-b4b3-497e-9b3d-d56d55e05792-kube-api-access-pvr4v\") pod \"4b3851de-b4b3-497e-9b3d-d56d55e05792\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.349313 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-inventory\") pod \"4b3851de-b4b3-497e-9b3d-d56d55e05792\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.349381 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ceph\") pod \"4b3851de-b4b3-497e-9b3d-d56d55e05792\" (UID: \"4b3851de-b4b3-497e-9b3d-d56d55e05792\") " Feb 28 09:38:48 crc kubenswrapper[4996]: 
I0228 09:38:48.355737 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ceph" (OuterVolumeSpecName: "ceph") pod "4b3851de-b4b3-497e-9b3d-d56d55e05792" (UID: "4b3851de-b4b3-497e-9b3d-d56d55e05792"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.356055 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b3851de-b4b3-497e-9b3d-d56d55e05792-kube-api-access-pvr4v" (OuterVolumeSpecName: "kube-api-access-pvr4v") pod "4b3851de-b4b3-497e-9b3d-d56d55e05792" (UID: "4b3851de-b4b3-497e-9b3d-d56d55e05792"). InnerVolumeSpecName "kube-api-access-pvr4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.374298 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-inventory" (OuterVolumeSpecName: "inventory") pod "4b3851de-b4b3-497e-9b3d-d56d55e05792" (UID: "4b3851de-b4b3-497e-9b3d-d56d55e05792"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.379517 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4b3851de-b4b3-497e-9b3d-d56d55e05792" (UID: "4b3851de-b4b3-497e-9b3d-d56d55e05792"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.452220 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.452269 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.452289 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4b3851de-b4b3-497e-9b3d-d56d55e05792-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.452311 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvr4v\" (UniqueName: \"kubernetes.io/projected/4b3851de-b4b3-497e-9b3d-d56d55e05792-kube-api-access-pvr4v\") on node \"crc\" DevicePath \"\"" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.790145 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" event={"ID":"4b3851de-b4b3-497e-9b3d-d56d55e05792","Type":"ContainerDied","Data":"b30f70d6f7807ae06f55647d423c7322fb21b8f60cfd97568250ff7d60f89225"} Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.790194 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b30f70d6f7807ae06f55647d423c7322fb21b8f60cfd97568250ff7d60f89225" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.790285 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-6sltr" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.946940 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr"] Feb 28 09:38:48 crc kubenswrapper[4996]: E0228 09:38:48.947627 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b3851de-b4b3-497e-9b3d-d56d55e05792" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.947718 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b3851de-b4b3-497e-9b3d-d56d55e05792" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.947990 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b3851de-b4b3-497e-9b3d-d56d55e05792" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.948892 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.951250 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.951450 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.951718 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.951874 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.952029 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:38:48 crc kubenswrapper[4996]: I0228 09:38:48.964874 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr"] Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.062141 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.062227 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: 
\"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.062251 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzfz\" (UniqueName: \"kubernetes.io/projected/500ae8f8-17b1-45fb-9569-d49fd19cdea6-kube-api-access-hgzfz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.062271 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.163705 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.163747 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgzfz\" (UniqueName: \"kubernetes.io/projected/500ae8f8-17b1-45fb-9569-d49fd19cdea6-kube-api-access-hgzfz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.163776 
4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.163874 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.169352 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.170424 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.174264 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.180763 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgzfz\" (UniqueName: \"kubernetes.io/projected/500ae8f8-17b1-45fb-9569-d49fd19cdea6-kube-api-access-hgzfz\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ghlpr\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.274638 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:38:49 crc kubenswrapper[4996]: I0228 09:38:49.789453 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr"] Feb 28 09:38:50 crc kubenswrapper[4996]: I0228 09:38:50.820042 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" event={"ID":"500ae8f8-17b1-45fb-9569-d49fd19cdea6","Type":"ContainerStarted","Data":"a5ca263c517ecf276e345e8730101b80277e6a48b9210efcd288879fe214c773"} Feb 28 09:38:50 crc kubenswrapper[4996]: I0228 09:38:50.820465 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" event={"ID":"500ae8f8-17b1-45fb-9569-d49fd19cdea6","Type":"ContainerStarted","Data":"d6018a773e280477447369ca19e12a0b4f56f0ca8706096ee57caf4353779988"} Feb 28 09:38:50 crc kubenswrapper[4996]: I0228 09:38:50.846676 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" podStartSLOduration=2.311352923 podStartE2EDuration="2.846655477s" podCreationTimestamp="2026-02-28 09:38:48 +0000 UTC" firstStartedPulling="2026-02-28 09:38:49.795348969 +0000 UTC m=+2293.486151790" 
lastFinishedPulling="2026-02-28 09:38:50.330651533 +0000 UTC m=+2294.021454344" observedRunningTime="2026-02-28 09:38:50.842523696 +0000 UTC m=+2294.533326507" watchObservedRunningTime="2026-02-28 09:38:50.846655477 +0000 UTC m=+2294.537458298" Feb 28 09:38:54 crc kubenswrapper[4996]: I0228 09:38:54.035429 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:38:54 crc kubenswrapper[4996]: E0228 09:38:54.036509 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:39:06 crc kubenswrapper[4996]: I0228 09:39:06.033706 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:39:06 crc kubenswrapper[4996]: E0228 09:39:06.034562 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:39:19 crc kubenswrapper[4996]: I0228 09:39:19.033851 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:39:19 crc kubenswrapper[4996]: E0228 09:39:19.034570 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:39:26 crc kubenswrapper[4996]: I0228 09:39:26.144116 4996 generic.go:334] "Generic (PLEG): container finished" podID="500ae8f8-17b1-45fb-9569-d49fd19cdea6" containerID="a5ca263c517ecf276e345e8730101b80277e6a48b9210efcd288879fe214c773" exitCode=0 Feb 28 09:39:26 crc kubenswrapper[4996]: I0228 09:39:26.144188 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" event={"ID":"500ae8f8-17b1-45fb-9569-d49fd19cdea6","Type":"ContainerDied","Data":"a5ca263c517ecf276e345e8730101b80277e6a48b9210efcd288879fe214c773"} Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.538320 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.736586 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-inventory\") pod \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.736807 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ceph\") pod \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.737060 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgzfz\" (UniqueName: 
\"kubernetes.io/projected/500ae8f8-17b1-45fb-9569-d49fd19cdea6-kube-api-access-hgzfz\") pod \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.737113 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ssh-key-openstack-edpm-ipam\") pod \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\" (UID: \"500ae8f8-17b1-45fb-9569-d49fd19cdea6\") " Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.742850 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ceph" (OuterVolumeSpecName: "ceph") pod "500ae8f8-17b1-45fb-9569-d49fd19cdea6" (UID: "500ae8f8-17b1-45fb-9569-d49fd19cdea6"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.752738 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500ae8f8-17b1-45fb-9569-d49fd19cdea6-kube-api-access-hgzfz" (OuterVolumeSpecName: "kube-api-access-hgzfz") pod "500ae8f8-17b1-45fb-9569-d49fd19cdea6" (UID: "500ae8f8-17b1-45fb-9569-d49fd19cdea6"). InnerVolumeSpecName "kube-api-access-hgzfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.763503 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "500ae8f8-17b1-45fb-9569-d49fd19cdea6" (UID: "500ae8f8-17b1-45fb-9569-d49fd19cdea6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.764187 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-inventory" (OuterVolumeSpecName: "inventory") pod "500ae8f8-17b1-45fb-9569-d49fd19cdea6" (UID: "500ae8f8-17b1-45fb-9569-d49fd19cdea6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.841237 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.841615 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.841630 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgzfz\" (UniqueName: \"kubernetes.io/projected/500ae8f8-17b1-45fb-9569-d49fd19cdea6-kube-api-access-hgzfz\") on node \"crc\" DevicePath \"\"" Feb 28 09:39:27 crc kubenswrapper[4996]: I0228 09:39:27.841646 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/500ae8f8-17b1-45fb-9569-d49fd19cdea6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.164864 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" event={"ID":"500ae8f8-17b1-45fb-9569-d49fd19cdea6","Type":"ContainerDied","Data":"d6018a773e280477447369ca19e12a0b4f56f0ca8706096ee57caf4353779988"} Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.164918 4996 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ghlpr" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.164923 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6018a773e280477447369ca19e12a0b4f56f0ca8706096ee57caf4353779988" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.322455 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9"] Feb 28 09:39:28 crc kubenswrapper[4996]: E0228 09:39:28.323086 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500ae8f8-17b1-45fb-9569-d49fd19cdea6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.323118 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="500ae8f8-17b1-45fb-9569-d49fd19cdea6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.323476 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="500ae8f8-17b1-45fb-9569-d49fd19cdea6" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.324562 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.333535 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.333817 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.334210 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.334446 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.334829 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.338236 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9"] Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.352519 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9jh\" (UniqueName: \"kubernetes.io/projected/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-kube-api-access-6f9jh\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.352737 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: 
\"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.352796 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.352873 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.458552 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9jh\" (UniqueName: \"kubernetes.io/projected/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-kube-api-access-6f9jh\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.458702 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.458745 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.458807 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.465846 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.466546 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.478940 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: 
\"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.489359 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f9jh\" (UniqueName: \"kubernetes.io/projected/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-kube-api-access-6f9jh\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:28 crc kubenswrapper[4996]: I0228 09:39:28.652853 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:29 crc kubenswrapper[4996]: I0228 09:39:29.164083 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9"] Feb 28 09:39:30 crc kubenswrapper[4996]: I0228 09:39:30.033664 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:39:30 crc kubenswrapper[4996]: E0228 09:39:30.034498 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:39:30 crc kubenswrapper[4996]: I0228 09:39:30.183877 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" event={"ID":"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd","Type":"ContainerStarted","Data":"45c1681dcb1a82e380855b92093897865fce2db86d611868a0ca618643ef223f"} Feb 28 09:39:30 crc 
kubenswrapper[4996]: I0228 09:39:30.183934 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" event={"ID":"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd","Type":"ContainerStarted","Data":"3fc2677d17d451e23f46a4d0a967da643d410a02a2de4dfeded58b33e5d9b04d"} Feb 28 09:39:30 crc kubenswrapper[4996]: I0228 09:39:30.205575 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" podStartSLOduration=1.766369603 podStartE2EDuration="2.205552349s" podCreationTimestamp="2026-02-28 09:39:28 +0000 UTC" firstStartedPulling="2026-02-28 09:39:29.176440684 +0000 UTC m=+2332.867243495" lastFinishedPulling="2026-02-28 09:39:29.61562343 +0000 UTC m=+2333.306426241" observedRunningTime="2026-02-28 09:39:30.203344266 +0000 UTC m=+2333.894147087" watchObservedRunningTime="2026-02-28 09:39:30.205552349 +0000 UTC m=+2333.896355170" Feb 28 09:39:34 crc kubenswrapper[4996]: I0228 09:39:34.217580 4996 generic.go:334] "Generic (PLEG): container finished" podID="4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd" containerID="45c1681dcb1a82e380855b92093897865fce2db86d611868a0ca618643ef223f" exitCode=0 Feb 28 09:39:34 crc kubenswrapper[4996]: I0228 09:39:34.217650 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" event={"ID":"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd","Type":"ContainerDied","Data":"45c1681dcb1a82e380855b92093897865fce2db86d611868a0ca618643ef223f"} Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.683609 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.805136 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-inventory\") pod \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.805230 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ssh-key-openstack-edpm-ipam\") pod \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.805359 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f9jh\" (UniqueName: \"kubernetes.io/projected/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-kube-api-access-6f9jh\") pod \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.805423 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ceph\") pod \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\" (UID: \"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd\") " Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.813427 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-kube-api-access-6f9jh" (OuterVolumeSpecName: "kube-api-access-6f9jh") pod "4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd" (UID: "4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd"). InnerVolumeSpecName "kube-api-access-6f9jh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.814193 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ceph" (OuterVolumeSpecName: "ceph") pod "4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd" (UID: "4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.847406 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-inventory" (OuterVolumeSpecName: "inventory") pod "4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd" (UID: "4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.849055 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd" (UID: "4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.914982 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.915025 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.915036 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f9jh\" (UniqueName: \"kubernetes.io/projected/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-kube-api-access-6f9jh\") on node \"crc\" DevicePath \"\"" Feb 28 09:39:35 crc kubenswrapper[4996]: I0228 09:39:35.915044 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.243815 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" event={"ID":"4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd","Type":"ContainerDied","Data":"3fc2677d17d451e23f46a4d0a967da643d410a02a2de4dfeded58b33e5d9b04d"} Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.244075 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fc2677d17d451e23f46a4d0a967da643d410a02a2de4dfeded58b33e5d9b04d" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.243935 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.325251 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89"] Feb 28 09:39:36 crc kubenswrapper[4996]: E0228 09:39:36.325676 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.325694 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.325945 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.326604 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.334233 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89"] Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.357406 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.357547 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.357700 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.357924 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.358239 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.526644 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.526727 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: 
\"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.526857 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.526902 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzh2f\" (UniqueName: \"kubernetes.io/projected/1fbdc43d-b502-4eca-9040-604271ec1f6e-kube-api-access-wzh2f\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.628815 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.628930 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzh2f\" (UniqueName: \"kubernetes.io/projected/1fbdc43d-b502-4eca-9040-604271ec1f6e-kube-api-access-wzh2f\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.629116 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.629198 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.637055 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.637061 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.637136 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: 
\"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.667393 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzh2f\" (UniqueName: \"kubernetes.io/projected/1fbdc43d-b502-4eca-9040-604271ec1f6e-kube-api-access-wzh2f\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-64g89\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:36 crc kubenswrapper[4996]: I0228 09:39:36.685541 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:39:37 crc kubenswrapper[4996]: I0228 09:39:37.349348 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89"] Feb 28 09:39:37 crc kubenswrapper[4996]: I0228 09:39:37.791575 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:39:38 crc kubenswrapper[4996]: I0228 09:39:38.263522 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" event={"ID":"1fbdc43d-b502-4eca-9040-604271ec1f6e","Type":"ContainerStarted","Data":"08ecaf988620260179d8db36da03bb33b067928d10d8f748c6b7ae51c4326c1d"} Feb 28 09:39:38 crc kubenswrapper[4996]: I0228 09:39:38.263801 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" event={"ID":"1fbdc43d-b502-4eca-9040-604271ec1f6e","Type":"ContainerStarted","Data":"ae1f6ab3d7ed4aac76bee551e7beb23cb903bf7d26c059963ee67b4c2da89f35"} Feb 28 09:39:38 crc kubenswrapper[4996]: I0228 09:39:38.308188 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" podStartSLOduration=1.882150633 podStartE2EDuration="2.308151046s" podCreationTimestamp="2026-02-28 09:39:36 +0000 UTC" firstStartedPulling="2026-02-28 09:39:37.362228195 +0000 UTC m=+2341.053031006" lastFinishedPulling="2026-02-28 09:39:37.788228568 +0000 UTC m=+2341.479031419" observedRunningTime="2026-02-28 09:39:38.281928795 +0000 UTC m=+2341.972731646" watchObservedRunningTime="2026-02-28 09:39:38.308151046 +0000 UTC m=+2341.998953937" Feb 28 09:39:43 crc kubenswrapper[4996]: I0228 09:39:43.036626 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:39:43 crc kubenswrapper[4996]: E0228 09:39:43.037647 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:39:58 crc kubenswrapper[4996]: I0228 09:39:58.033654 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:39:58 crc kubenswrapper[4996]: E0228 09:39:58.034594 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.142623 4996 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29537860-5rcwq"] Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.144071 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537860-5rcwq" Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.147065 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.147216 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.147262 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.154144 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537860-5rcwq"] Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.226776 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvkm\" (UniqueName: \"kubernetes.io/projected/c05758a8-a3e2-43c8-80b3-1cf42027c11e-kube-api-access-7dvkm\") pod \"auto-csr-approver-29537860-5rcwq\" (UID: \"c05758a8-a3e2-43c8-80b3-1cf42027c11e\") " pod="openshift-infra/auto-csr-approver-29537860-5rcwq" Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.328397 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dvkm\" (UniqueName: \"kubernetes.io/projected/c05758a8-a3e2-43c8-80b3-1cf42027c11e-kube-api-access-7dvkm\") pod \"auto-csr-approver-29537860-5rcwq\" (UID: \"c05758a8-a3e2-43c8-80b3-1cf42027c11e\") " pod="openshift-infra/auto-csr-approver-29537860-5rcwq" Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.348962 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dvkm\" (UniqueName: 
\"kubernetes.io/projected/c05758a8-a3e2-43c8-80b3-1cf42027c11e-kube-api-access-7dvkm\") pod \"auto-csr-approver-29537860-5rcwq\" (UID: \"c05758a8-a3e2-43c8-80b3-1cf42027c11e\") " pod="openshift-infra/auto-csr-approver-29537860-5rcwq" Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.505672 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537860-5rcwq" Feb 28 09:40:00 crc kubenswrapper[4996]: I0228 09:40:00.953671 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537860-5rcwq"] Feb 28 09:40:01 crc kubenswrapper[4996]: I0228 09:40:01.492369 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537860-5rcwq" event={"ID":"c05758a8-a3e2-43c8-80b3-1cf42027c11e","Type":"ContainerStarted","Data":"d7d214e2aec6b26331c1cb92217b03d84d503821e0e27129d5a7bd3c57fe013b"} Feb 28 09:40:03 crc kubenswrapper[4996]: I0228 09:40:03.513208 4996 generic.go:334] "Generic (PLEG): container finished" podID="c05758a8-a3e2-43c8-80b3-1cf42027c11e" containerID="db1540672614519805cdb32450b43e178c08c6cc4ed200117c6eaa189763358e" exitCode=0 Feb 28 09:40:03 crc kubenswrapper[4996]: I0228 09:40:03.513442 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537860-5rcwq" event={"ID":"c05758a8-a3e2-43c8-80b3-1cf42027c11e","Type":"ContainerDied","Data":"db1540672614519805cdb32450b43e178c08c6cc4ed200117c6eaa189763358e"} Feb 28 09:40:04 crc kubenswrapper[4996]: I0228 09:40:04.903750 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537860-5rcwq" Feb 28 09:40:04 crc kubenswrapper[4996]: I0228 09:40:04.916965 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dvkm\" (UniqueName: \"kubernetes.io/projected/c05758a8-a3e2-43c8-80b3-1cf42027c11e-kube-api-access-7dvkm\") pod \"c05758a8-a3e2-43c8-80b3-1cf42027c11e\" (UID: \"c05758a8-a3e2-43c8-80b3-1cf42027c11e\") " Feb 28 09:40:04 crc kubenswrapper[4996]: I0228 09:40:04.930154 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c05758a8-a3e2-43c8-80b3-1cf42027c11e-kube-api-access-7dvkm" (OuterVolumeSpecName: "kube-api-access-7dvkm") pod "c05758a8-a3e2-43c8-80b3-1cf42027c11e" (UID: "c05758a8-a3e2-43c8-80b3-1cf42027c11e"). InnerVolumeSpecName "kube-api-access-7dvkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:40:05 crc kubenswrapper[4996]: I0228 09:40:05.019599 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dvkm\" (UniqueName: \"kubernetes.io/projected/c05758a8-a3e2-43c8-80b3-1cf42027c11e-kube-api-access-7dvkm\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:05 crc kubenswrapper[4996]: I0228 09:40:05.532194 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537860-5rcwq" event={"ID":"c05758a8-a3e2-43c8-80b3-1cf42027c11e","Type":"ContainerDied","Data":"d7d214e2aec6b26331c1cb92217b03d84d503821e0e27129d5a7bd3c57fe013b"} Feb 28 09:40:05 crc kubenswrapper[4996]: I0228 09:40:05.532609 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d214e2aec6b26331c1cb92217b03d84d503821e0e27129d5a7bd3c57fe013b" Feb 28 09:40:05 crc kubenswrapper[4996]: I0228 09:40:05.532283 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537860-5rcwq" Feb 28 09:40:05 crc kubenswrapper[4996]: I0228 09:40:05.987766 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537854-hkcxw"] Feb 28 09:40:05 crc kubenswrapper[4996]: I0228 09:40:05.995451 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537854-hkcxw"] Feb 28 09:40:07 crc kubenswrapper[4996]: I0228 09:40:07.045885 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02749d81-6850-4494-9008-f509990982ce" path="/var/lib/kubelet/pods/02749d81-6850-4494-9008-f509990982ce/volumes" Feb 28 09:40:10 crc kubenswrapper[4996]: I0228 09:40:10.034585 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:40:10 crc kubenswrapper[4996]: E0228 09:40:10.035211 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:40:21 crc kubenswrapper[4996]: I0228 09:40:21.680063 4996 generic.go:334] "Generic (PLEG): container finished" podID="1fbdc43d-b502-4eca-9040-604271ec1f6e" containerID="08ecaf988620260179d8db36da03bb33b067928d10d8f748c6b7ae51c4326c1d" exitCode=0 Feb 28 09:40:21 crc kubenswrapper[4996]: I0228 09:40:21.680131 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" event={"ID":"1fbdc43d-b502-4eca-9040-604271ec1f6e","Type":"ContainerDied","Data":"08ecaf988620260179d8db36da03bb33b067928d10d8f748c6b7ae51c4326c1d"} Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 
09:40:23.035370 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:40:23 crc kubenswrapper[4996]: E0228 09:40:23.035901 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.104927 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.281356 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ceph\") pod \"1fbdc43d-b502-4eca-9040-604271ec1f6e\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.281511 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzh2f\" (UniqueName: \"kubernetes.io/projected/1fbdc43d-b502-4eca-9040-604271ec1f6e-kube-api-access-wzh2f\") pod \"1fbdc43d-b502-4eca-9040-604271ec1f6e\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.281588 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ssh-key-openstack-edpm-ipam\") pod \"1fbdc43d-b502-4eca-9040-604271ec1f6e\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.281618 4996 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-inventory\") pod \"1fbdc43d-b502-4eca-9040-604271ec1f6e\" (UID: \"1fbdc43d-b502-4eca-9040-604271ec1f6e\") " Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.292375 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ceph" (OuterVolumeSpecName: "ceph") pod "1fbdc43d-b502-4eca-9040-604271ec1f6e" (UID: "1fbdc43d-b502-4eca-9040-604271ec1f6e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.295823 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fbdc43d-b502-4eca-9040-604271ec1f6e-kube-api-access-wzh2f" (OuterVolumeSpecName: "kube-api-access-wzh2f") pod "1fbdc43d-b502-4eca-9040-604271ec1f6e" (UID: "1fbdc43d-b502-4eca-9040-604271ec1f6e"). InnerVolumeSpecName "kube-api-access-wzh2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.315415 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1fbdc43d-b502-4eca-9040-604271ec1f6e" (UID: "1fbdc43d-b502-4eca-9040-604271ec1f6e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.316421 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-inventory" (OuterVolumeSpecName: "inventory") pod "1fbdc43d-b502-4eca-9040-604271ec1f6e" (UID: "1fbdc43d-b502-4eca-9040-604271ec1f6e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.383157 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.383188 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.383197 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1fbdc43d-b502-4eca-9040-604271ec1f6e-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.383207 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzh2f\" (UniqueName: \"kubernetes.io/projected/1fbdc43d-b502-4eca-9040-604271ec1f6e-kube-api-access-wzh2f\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.703284 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" event={"ID":"1fbdc43d-b502-4eca-9040-604271ec1f6e","Type":"ContainerDied","Data":"ae1f6ab3d7ed4aac76bee551e7beb23cb903bf7d26c059963ee67b4c2da89f35"} Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.703565 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae1f6ab3d7ed4aac76bee551e7beb23cb903bf7d26c059963ee67b4c2da89f35" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.703341 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-64g89" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.786733 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-npr5w"] Feb 28 09:40:23 crc kubenswrapper[4996]: E0228 09:40:23.787172 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c05758a8-a3e2-43c8-80b3-1cf42027c11e" containerName="oc" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.787194 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c05758a8-a3e2-43c8-80b3-1cf42027c11e" containerName="oc" Feb 28 09:40:23 crc kubenswrapper[4996]: E0228 09:40:23.787217 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fbdc43d-b502-4eca-9040-604271ec1f6e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.787227 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fbdc43d-b502-4eca-9040-604271ec1f6e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.787435 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05758a8-a3e2-43c8-80b3-1cf42027c11e" containerName="oc" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.787461 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fbdc43d-b502-4eca-9040-604271ec1f6e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.788194 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.791335 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.791514 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.791484 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.792277 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.793506 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.805970 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-npr5w"] Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.890642 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.890727 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" 
Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.890769 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ceph\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.890942 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76dt\" (UniqueName: \"kubernetes.io/projected/eacfea11-3471-48df-a164-22a498aa7574-kube-api-access-x76dt\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.993234 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x76dt\" (UniqueName: \"kubernetes.io/projected/eacfea11-3471-48df-a164-22a498aa7574-kube-api-access-x76dt\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.994158 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.994239 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: 
\"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.994282 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ceph\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.998191 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ceph\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:23 crc kubenswrapper[4996]: I0228 09:40:23.999459 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:24 crc kubenswrapper[4996]: I0228 09:40:24.000355 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:24 crc kubenswrapper[4996]: I0228 09:40:24.019600 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x76dt\" (UniqueName: \"kubernetes.io/projected/eacfea11-3471-48df-a164-22a498aa7574-kube-api-access-x76dt\") pod \"ssh-known-hosts-edpm-deployment-npr5w\" (UID: 
\"eacfea11-3471-48df-a164-22a498aa7574\") " pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:24 crc kubenswrapper[4996]: I0228 09:40:24.104185 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:24 crc kubenswrapper[4996]: I0228 09:40:24.653144 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-npr5w"] Feb 28 09:40:24 crc kubenswrapper[4996]: I0228 09:40:24.715794 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" event={"ID":"eacfea11-3471-48df-a164-22a498aa7574","Type":"ContainerStarted","Data":"36be2b4a0894e448bc0f25d0587a61ca6ac15237b4c09162da12c65e73e12325"} Feb 28 09:40:25 crc kubenswrapper[4996]: I0228 09:40:25.725533 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" event={"ID":"eacfea11-3471-48df-a164-22a498aa7574","Type":"ContainerStarted","Data":"b49b351f454467d34641239e058fec7003f8f8492f6cbc14f2c3ab02b91a8bfb"} Feb 28 09:40:25 crc kubenswrapper[4996]: I0228 09:40:25.745457 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" podStartSLOduration=2.272992117 podStartE2EDuration="2.745434376s" podCreationTimestamp="2026-02-28 09:40:23 +0000 UTC" firstStartedPulling="2026-02-28 09:40:24.663028437 +0000 UTC m=+2388.353831238" lastFinishedPulling="2026-02-28 09:40:25.135470686 +0000 UTC m=+2388.826273497" observedRunningTime="2026-02-28 09:40:25.74236416 +0000 UTC m=+2389.433166981" watchObservedRunningTime="2026-02-28 09:40:25.745434376 +0000 UTC m=+2389.436237187" Feb 28 09:40:34 crc kubenswrapper[4996]: I0228 09:40:34.808967 4996 generic.go:334] "Generic (PLEG): container finished" podID="eacfea11-3471-48df-a164-22a498aa7574" containerID="b49b351f454467d34641239e058fec7003f8f8492f6cbc14f2c3ab02b91a8bfb" 
exitCode=0 Feb 28 09:40:34 crc kubenswrapper[4996]: I0228 09:40:34.809119 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" event={"ID":"eacfea11-3471-48df-a164-22a498aa7574","Type":"ContainerDied","Data":"b49b351f454467d34641239e058fec7003f8f8492f6cbc14f2c3ab02b91a8bfb"} Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.312120 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.446515 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-inventory-0\") pod \"eacfea11-3471-48df-a164-22a498aa7574\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.446835 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ssh-key-openstack-edpm-ipam\") pod \"eacfea11-3471-48df-a164-22a498aa7574\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.446941 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x76dt\" (UniqueName: \"kubernetes.io/projected/eacfea11-3471-48df-a164-22a498aa7574-kube-api-access-x76dt\") pod \"eacfea11-3471-48df-a164-22a498aa7574\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.447078 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ceph\") pod \"eacfea11-3471-48df-a164-22a498aa7574\" (UID: \"eacfea11-3471-48df-a164-22a498aa7574\") " Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 
09:40:36.452961 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eacfea11-3471-48df-a164-22a498aa7574-kube-api-access-x76dt" (OuterVolumeSpecName: "kube-api-access-x76dt") pod "eacfea11-3471-48df-a164-22a498aa7574" (UID: "eacfea11-3471-48df-a164-22a498aa7574"). InnerVolumeSpecName "kube-api-access-x76dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.454328 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ceph" (OuterVolumeSpecName: "ceph") pod "eacfea11-3471-48df-a164-22a498aa7574" (UID: "eacfea11-3471-48df-a164-22a498aa7574"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.473633 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "eacfea11-3471-48df-a164-22a498aa7574" (UID: "eacfea11-3471-48df-a164-22a498aa7574"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.481427 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "eacfea11-3471-48df-a164-22a498aa7574" (UID: "eacfea11-3471-48df-a164-22a498aa7574"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.549236 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.549263 4996 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.549275 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/eacfea11-3471-48df-a164-22a498aa7574-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.549283 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x76dt\" (UniqueName: \"kubernetes.io/projected/eacfea11-3471-48df-a164-22a498aa7574-kube-api-access-x76dt\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.838506 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" event={"ID":"eacfea11-3471-48df-a164-22a498aa7574","Type":"ContainerDied","Data":"36be2b4a0894e448bc0f25d0587a61ca6ac15237b4c09162da12c65e73e12325"} Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.838586 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36be2b4a0894e448bc0f25d0587a61ca6ac15237b4c09162da12c65e73e12325" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.838671 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-npr5w" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.904391 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc"] Feb 28 09:40:36 crc kubenswrapper[4996]: E0228 09:40:36.904815 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eacfea11-3471-48df-a164-22a498aa7574" containerName="ssh-known-hosts-edpm-deployment" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.904831 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="eacfea11-3471-48df-a164-22a498aa7574" containerName="ssh-known-hosts-edpm-deployment" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.905092 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="eacfea11-3471-48df-a164-22a498aa7574" containerName="ssh-known-hosts-edpm-deployment" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.905740 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.908519 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.917043 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.917441 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.918734 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.919978 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:40:36 crc kubenswrapper[4996]: I0228 09:40:36.924497 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc"] Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.037658 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:40:37 crc kubenswrapper[4996]: E0228 09:40:37.038068 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.065979 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.066037 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.066061 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x99vd\" (UniqueName: \"kubernetes.io/projected/edad127b-e6c2-4b27-add0-60234ee9f1cb-kube-api-access-x99vd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.066296 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.167437 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.167879 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.167915 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.167946 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x99vd\" (UniqueName: \"kubernetes.io/projected/edad127b-e6c2-4b27-add0-60234ee9f1cb-kube-api-access-x99vd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.170241 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.174067 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.174345 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.181645 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.184415 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.184968 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.186744 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x99vd\" (UniqueName: \"kubernetes.io/projected/edad127b-e6c2-4b27-add0-60234ee9f1cb-kube-api-access-x99vd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-k5hwc\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.247227 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.253887 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.819332 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc"] Feb 28 09:40:37 crc kubenswrapper[4996]: I0228 09:40:37.847880 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" event={"ID":"edad127b-e6c2-4b27-add0-60234ee9f1cb","Type":"ContainerStarted","Data":"f54fde2d4f8efbde075d26aaf93a19e150da1b93d91256e28a3412aee9261815"} Feb 28 09:40:38 crc kubenswrapper[4996]: I0228 09:40:38.331498 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:40:38 crc kubenswrapper[4996]: I0228 09:40:38.860448 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" event={"ID":"edad127b-e6c2-4b27-add0-60234ee9f1cb","Type":"ContainerStarted","Data":"d2a05c0aa81591f9b2db3eb8e3a51790d5469217b4d9bd819201bc0fabb81e08"} Feb 28 09:40:38 crc kubenswrapper[4996]: I0228 09:40:38.886675 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" podStartSLOduration=2.390408394 podStartE2EDuration="2.886655365s" podCreationTimestamp="2026-02-28 09:40:36 +0000 UTC" firstStartedPulling="2026-02-28 09:40:37.832398445 +0000 UTC m=+2401.523201266" lastFinishedPulling="2026-02-28 09:40:38.328645396 +0000 UTC m=+2402.019448237" observedRunningTime="2026-02-28 09:40:38.883721653 +0000 UTC m=+2402.574524504" watchObservedRunningTime="2026-02-28 09:40:38.886655365 +0000 UTC m=+2402.577458186" Feb 28 09:40:44 crc kubenswrapper[4996]: I0228 09:40:44.931439 4996 scope.go:117] "RemoveContainer" containerID="7cff99767df48362bf8652d3732ca014cddc1dea97853b80bfb8b28b8183b55f" Feb 28 09:40:46 crc kubenswrapper[4996]: I0228 
09:40:46.955784 4996 generic.go:334] "Generic (PLEG): container finished" podID="edad127b-e6c2-4b27-add0-60234ee9f1cb" containerID="d2a05c0aa81591f9b2db3eb8e3a51790d5469217b4d9bd819201bc0fabb81e08" exitCode=0 Feb 28 09:40:46 crc kubenswrapper[4996]: I0228 09:40:46.956332 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" event={"ID":"edad127b-e6c2-4b27-add0-60234ee9f1cb","Type":"ContainerDied","Data":"d2a05c0aa81591f9b2db3eb8e3a51790d5469217b4d9bd819201bc0fabb81e08"} Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.491682 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.617767 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x99vd\" (UniqueName: \"kubernetes.io/projected/edad127b-e6c2-4b27-add0-60234ee9f1cb-kube-api-access-x99vd\") pod \"edad127b-e6c2-4b27-add0-60234ee9f1cb\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.617847 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-inventory\") pod \"edad127b-e6c2-4b27-add0-60234ee9f1cb\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.618021 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ceph\") pod \"edad127b-e6c2-4b27-add0-60234ee9f1cb\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.618050 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ssh-key-openstack-edpm-ipam\") pod \"edad127b-e6c2-4b27-add0-60234ee9f1cb\" (UID: \"edad127b-e6c2-4b27-add0-60234ee9f1cb\") " Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.633513 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edad127b-e6c2-4b27-add0-60234ee9f1cb-kube-api-access-x99vd" (OuterVolumeSpecName: "kube-api-access-x99vd") pod "edad127b-e6c2-4b27-add0-60234ee9f1cb" (UID: "edad127b-e6c2-4b27-add0-60234ee9f1cb"). InnerVolumeSpecName "kube-api-access-x99vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.640944 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ceph" (OuterVolumeSpecName: "ceph") pod "edad127b-e6c2-4b27-add0-60234ee9f1cb" (UID: "edad127b-e6c2-4b27-add0-60234ee9f1cb"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.666430 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "edad127b-e6c2-4b27-add0-60234ee9f1cb" (UID: "edad127b-e6c2-4b27-add0-60234ee9f1cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.693403 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-inventory" (OuterVolumeSpecName: "inventory") pod "edad127b-e6c2-4b27-add0-60234ee9f1cb" (UID: "edad127b-e6c2-4b27-add0-60234ee9f1cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.720527 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x99vd\" (UniqueName: \"kubernetes.io/projected/edad127b-e6c2-4b27-add0-60234ee9f1cb-kube-api-access-x99vd\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.720590 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.720618 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.720648 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/edad127b-e6c2-4b27-add0-60234ee9f1cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.977755 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" event={"ID":"edad127b-e6c2-4b27-add0-60234ee9f1cb","Type":"ContainerDied","Data":"f54fde2d4f8efbde075d26aaf93a19e150da1b93d91256e28a3412aee9261815"} Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.977797 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f54fde2d4f8efbde075d26aaf93a19e150da1b93d91256e28a3412aee9261815" Feb 28 09:40:48 crc kubenswrapper[4996]: I0228 09:40:48.977838 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-k5hwc" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.062066 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f"] Feb 28 09:40:49 crc kubenswrapper[4996]: E0228 09:40:49.062420 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edad127b-e6c2-4b27-add0-60234ee9f1cb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.062438 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="edad127b-e6c2-4b27-add0-60234ee9f1cb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.062605 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="edad127b-e6c2-4b27-add0-60234ee9f1cb" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.063161 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.067532 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.067628 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.067932 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.068103 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.068313 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.090391 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f"] Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.128703 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.128956 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htqn5\" (UniqueName: \"kubernetes.io/projected/b2cd442e-b51b-41cc-a664-fead95314ada-kube-api-access-htqn5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.129231 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.129419 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.232206 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.234423 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htqn5\" (UniqueName: \"kubernetes.io/projected/b2cd442e-b51b-41cc-a664-fead95314ada-kube-api-access-htqn5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.234538 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.235182 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.237497 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.238983 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.240180 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 
28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.250501 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htqn5\" (UniqueName: \"kubernetes.io/projected/b2cd442e-b51b-41cc-a664-fead95314ada-kube-api-access-htqn5\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.386136 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.961909 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f"] Feb 28 09:40:49 crc kubenswrapper[4996]: W0228 09:40:49.969083 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2cd442e_b51b_41cc_a664_fead95314ada.slice/crio-3697268559c56a408c2a09856a7abb6b9b044e9f4ff5a2873c1af217676c462e WatchSource:0}: Error finding container 3697268559c56a408c2a09856a7abb6b9b044e9f4ff5a2873c1af217676c462e: Status 404 returned error can't find the container with id 3697268559c56a408c2a09856a7abb6b9b044e9f4ff5a2873c1af217676c462e Feb 28 09:40:49 crc kubenswrapper[4996]: I0228 09:40:49.987654 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" event={"ID":"b2cd442e-b51b-41cc-a664-fead95314ada","Type":"ContainerStarted","Data":"3697268559c56a408c2a09856a7abb6b9b044e9f4ff5a2873c1af217676c462e"} Feb 28 09:40:50 crc kubenswrapper[4996]: I0228 09:40:50.998831 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" 
event={"ID":"b2cd442e-b51b-41cc-a664-fead95314ada","Type":"ContainerStarted","Data":"cf2af09edae952eb30d3215e400c648a1059dfe8b3793eed344f2e00e186af25"} Feb 28 09:40:51 crc kubenswrapper[4996]: I0228 09:40:51.034617 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" podStartSLOduration=1.649170842 podStartE2EDuration="2.034589574s" podCreationTimestamp="2026-02-28 09:40:49 +0000 UTC" firstStartedPulling="2026-02-28 09:40:49.971641332 +0000 UTC m=+2413.662444153" lastFinishedPulling="2026-02-28 09:40:50.357060064 +0000 UTC m=+2414.047862885" observedRunningTime="2026-02-28 09:40:51.027878221 +0000 UTC m=+2414.718681052" watchObservedRunningTime="2026-02-28 09:40:51.034589574 +0000 UTC m=+2414.725392435" Feb 28 09:40:52 crc kubenswrapper[4996]: I0228 09:40:52.033583 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:40:52 crc kubenswrapper[4996]: E0228 09:40:52.035215 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:41:01 crc kubenswrapper[4996]: I0228 09:41:01.103254 4996 generic.go:334] "Generic (PLEG): container finished" podID="b2cd442e-b51b-41cc-a664-fead95314ada" containerID="cf2af09edae952eb30d3215e400c648a1059dfe8b3793eed344f2e00e186af25" exitCode=0 Feb 28 09:41:01 crc kubenswrapper[4996]: I0228 09:41:01.103288 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" 
event={"ID":"b2cd442e-b51b-41cc-a664-fead95314ada","Type":"ContainerDied","Data":"cf2af09edae952eb30d3215e400c648a1059dfe8b3793eed344f2e00e186af25"} Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.598246 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.755433 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ssh-key-openstack-edpm-ipam\") pod \"b2cd442e-b51b-41cc-a664-fead95314ada\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.755603 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-inventory\") pod \"b2cd442e-b51b-41cc-a664-fead95314ada\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.755690 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ceph\") pod \"b2cd442e-b51b-41cc-a664-fead95314ada\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.755917 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htqn5\" (UniqueName: \"kubernetes.io/projected/b2cd442e-b51b-41cc-a664-fead95314ada-kube-api-access-htqn5\") pod \"b2cd442e-b51b-41cc-a664-fead95314ada\" (UID: \"b2cd442e-b51b-41cc-a664-fead95314ada\") " Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.763359 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ceph" 
(OuterVolumeSpecName: "ceph") pod "b2cd442e-b51b-41cc-a664-fead95314ada" (UID: "b2cd442e-b51b-41cc-a664-fead95314ada"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.766233 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2cd442e-b51b-41cc-a664-fead95314ada-kube-api-access-htqn5" (OuterVolumeSpecName: "kube-api-access-htqn5") pod "b2cd442e-b51b-41cc-a664-fead95314ada" (UID: "b2cd442e-b51b-41cc-a664-fead95314ada"). InnerVolumeSpecName "kube-api-access-htqn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.791061 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b2cd442e-b51b-41cc-a664-fead95314ada" (UID: "b2cd442e-b51b-41cc-a664-fead95314ada"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.811468 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-inventory" (OuterVolumeSpecName: "inventory") pod "b2cd442e-b51b-41cc-a664-fead95314ada" (UID: "b2cd442e-b51b-41cc-a664-fead95314ada"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.858654 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.858729 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htqn5\" (UniqueName: \"kubernetes.io/projected/b2cd442e-b51b-41cc-a664-fead95314ada-kube-api-access-htqn5\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.858756 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:02 crc kubenswrapper[4996]: I0228 09:41:02.858781 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2cd442e-b51b-41cc-a664-fead95314ada-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.121232 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" event={"ID":"b2cd442e-b51b-41cc-a664-fead95314ada","Type":"ContainerDied","Data":"3697268559c56a408c2a09856a7abb6b9b044e9f4ff5a2873c1af217676c462e"} Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.121297 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3697268559c56a408c2a09856a7abb6b9b044e9f4ff5a2873c1af217676c462e" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.121381 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.242441 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v"] Feb 28 09:41:03 crc kubenswrapper[4996]: E0228 09:41:03.243692 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2cd442e-b51b-41cc-a664-fead95314ada" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.243725 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2cd442e-b51b-41cc-a664-fead95314ada" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.245099 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2cd442e-b51b-41cc-a664-fead95314ada" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.247248 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.256667 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.256731 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.256822 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.256935 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.257097 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.257492 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.257940 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.258085 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.264571 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v"] Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.373455 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.373498 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.373518 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.373754 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.373917 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.373959 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcchv\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-kube-api-access-bcchv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.374124 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.374184 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.374229 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.374277 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.374398 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.374534 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.374634 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.476861 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.476948 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcchv\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-kube-api-access-bcchv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.476998 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.477083 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ceph\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.477129 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.477168 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.477233 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.477306 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.477360 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.477449 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.477492 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.477541 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc 
kubenswrapper[4996]: I0228 09:41:03.477741 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.483264 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.483300 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.483714 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.484458 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.485196 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.485706 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.485762 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.485999 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ceph\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.485997 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.486224 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.487264 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.487685 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.503667 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcchv\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-kube-api-access-bcchv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:03 crc kubenswrapper[4996]: I0228 09:41:03.612854 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:04 crc kubenswrapper[4996]: I0228 09:41:04.163451 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v"] Feb 28 09:41:05 crc kubenswrapper[4996]: I0228 09:41:05.140934 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" event={"ID":"645c2ca2-c74e-44d7-a0e7-6f161b14aa55","Type":"ContainerStarted","Data":"f9ae75cec46636911c87047151cde04b76222d8741c4605a59623df6e480d070"} Feb 28 09:41:05 crc kubenswrapper[4996]: I0228 09:41:05.144087 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" event={"ID":"645c2ca2-c74e-44d7-a0e7-6f161b14aa55","Type":"ContainerStarted","Data":"d9505601403d998881464988f183e659386cf688ab610b941ecced611ae618e2"} Feb 28 09:41:05 crc kubenswrapper[4996]: I0228 09:41:05.179683 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" podStartSLOduration=1.761336636 podStartE2EDuration="2.179654921s" podCreationTimestamp="2026-02-28 09:41:03 +0000 UTC" firstStartedPulling="2026-02-28 09:41:04.165742518 +0000 UTC 
m=+2427.856545329" lastFinishedPulling="2026-02-28 09:41:04.584060763 +0000 UTC m=+2428.274863614" observedRunningTime="2026-02-28 09:41:05.1722421 +0000 UTC m=+2428.863044921" watchObservedRunningTime="2026-02-28 09:41:05.179654921 +0000 UTC m=+2428.870457772" Feb 28 09:41:07 crc kubenswrapper[4996]: I0228 09:41:07.039501 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:41:07 crc kubenswrapper[4996]: E0228 09:41:07.040050 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:41:20 crc kubenswrapper[4996]: I0228 09:41:20.033587 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:41:20 crc kubenswrapper[4996]: E0228 09:41:20.034799 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:41:32 crc kubenswrapper[4996]: I0228 09:41:32.039036 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:41:32 crc kubenswrapper[4996]: E0228 09:41:32.040829 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:41:36 crc kubenswrapper[4996]: I0228 09:41:36.493062 4996 generic.go:334] "Generic (PLEG): container finished" podID="645c2ca2-c74e-44d7-a0e7-6f161b14aa55" containerID="f9ae75cec46636911c87047151cde04b76222d8741c4605a59623df6e480d070" exitCode=0 Feb 28 09:41:36 crc kubenswrapper[4996]: I0228 09:41:36.493156 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" event={"ID":"645c2ca2-c74e-44d7-a0e7-6f161b14aa55","Type":"ContainerDied","Data":"f9ae75cec46636911c87047151cde04b76222d8741c4605a59623df6e480d070"} Feb 28 09:41:37 crc kubenswrapper[4996]: I0228 09:41:37.960391 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.118383 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-neutron-metadata-combined-ca-bundle\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.118895 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ssh-key-openstack-edpm-ipam\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.118991 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-nova-combined-ca-bundle\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.119102 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.119164 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-bootstrap-combined-ca-bundle\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" 
(UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.119236 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ovn-combined-ca-bundle\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.119312 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-inventory\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.119370 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-repo-setup-combined-ca-bundle\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.119421 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ceph\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.119653 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcchv\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-kube-api-access-bcchv\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.119883 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.120447 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-libvirt-combined-ca-bundle\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.120551 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-ovn-default-certs-0\") pod \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\" (UID: \"645c2ca2-c74e-44d7-a0e7-6f161b14aa55\") " Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.126228 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.126423 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ceph" (OuterVolumeSpecName: "ceph") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.126524 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.129540 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.130097 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.131465 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-kube-api-access-bcchv" (OuterVolumeSpecName: "kube-api-access-bcchv") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). 
InnerVolumeSpecName "kube-api-access-bcchv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.131519 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.132733 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.134410 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.134507 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.138322 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.152737 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-inventory" (OuterVolumeSpecName: "inventory") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.156404 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "645c2ca2-c74e-44d7-a0e7-6f161b14aa55" (UID: "645c2ca2-c74e-44d7-a0e7-6f161b14aa55"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224350 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224402 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcchv\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-kube-api-access-bcchv\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224413 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224423 4996 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224435 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224444 4996 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224452 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224464 4996 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224477 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224488 4996 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224500 4996 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224511 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.224520 4996 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/645c2ca2-c74e-44d7-a0e7-6f161b14aa55-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.525475 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" event={"ID":"645c2ca2-c74e-44d7-a0e7-6f161b14aa55","Type":"ContainerDied","Data":"d9505601403d998881464988f183e659386cf688ab610b941ecced611ae618e2"} Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.525527 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9505601403d998881464988f183e659386cf688ab610b941ecced611ae618e2" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.525538 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.648916 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh"] Feb 28 09:41:38 crc kubenswrapper[4996]: E0228 09:41:38.649325 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645c2ca2-c74e-44d7-a0e7-6f161b14aa55" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.649346 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="645c2ca2-c74e-44d7-a0e7-6f161b14aa55" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.649593 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="645c2ca2-c74e-44d7-a0e7-6f161b14aa55" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.650312 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.654261 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.654681 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.655946 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.656043 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.656671 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.682499 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh"] Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.733322 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.733434 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: 
\"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.733485 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.733599 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8wjz\" (UniqueName: \"kubernetes.io/projected/f0ea0b93-3364-4191-b14e-6ad457132874-kube-api-access-w8wjz\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.834697 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8wjz\" (UniqueName: \"kubernetes.io/projected/f0ea0b93-3364-4191-b14e-6ad457132874-kube-api-access-w8wjz\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.834991 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.835056 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.835080 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.838777 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.839185 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.842851 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.856384 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8wjz\" (UniqueName: \"kubernetes.io/projected/f0ea0b93-3364-4191-b14e-6ad457132874-kube-api-access-w8wjz\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:38 crc kubenswrapper[4996]: I0228 09:41:38.969083 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:40 crc kubenswrapper[4996]: I0228 09:41:40.030672 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh"] Feb 28 09:41:40 crc kubenswrapper[4996]: W0228 09:41:40.036280 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0ea0b93_3364_4191_b14e_6ad457132874.slice/crio-99782e6a82dacb317c339bfbd7ea2ca77cf8ac2cb4c7359f1cf62f01ae4c2e6b WatchSource:0}: Error finding container 99782e6a82dacb317c339bfbd7ea2ca77cf8ac2cb4c7359f1cf62f01ae4c2e6b: Status 404 returned error can't find the container with id 99782e6a82dacb317c339bfbd7ea2ca77cf8ac2cb4c7359f1cf62f01ae4c2e6b Feb 28 09:41:40 crc kubenswrapper[4996]: I0228 09:41:40.038591 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:41:40 crc kubenswrapper[4996]: I0228 09:41:40.549552 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" event={"ID":"f0ea0b93-3364-4191-b14e-6ad457132874","Type":"ContainerStarted","Data":"99782e6a82dacb317c339bfbd7ea2ca77cf8ac2cb4c7359f1cf62f01ae4c2e6b"} Feb 28 09:41:41 crc kubenswrapper[4996]: 
I0228 09:41:41.563526 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" event={"ID":"f0ea0b93-3364-4191-b14e-6ad457132874","Type":"ContainerStarted","Data":"6c0526a9b64a36f05b41c1169df8ff41370b9233847daa25e65d48374b2aa500"} Feb 28 09:41:41 crc kubenswrapper[4996]: I0228 09:41:41.602920 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" podStartSLOduration=2.890674758 podStartE2EDuration="3.602892267s" podCreationTimestamp="2026-02-28 09:41:38 +0000 UTC" firstStartedPulling="2026-02-28 09:41:40.038411005 +0000 UTC m=+2463.729213816" lastFinishedPulling="2026-02-28 09:41:40.750628504 +0000 UTC m=+2464.441431325" observedRunningTime="2026-02-28 09:41:41.58955925 +0000 UTC m=+2465.280362071" watchObservedRunningTime="2026-02-28 09:41:41.602892267 +0000 UTC m=+2465.293695108" Feb 28 09:41:46 crc kubenswrapper[4996]: I0228 09:41:46.033255 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:41:46 crc kubenswrapper[4996]: E0228 09:41:46.034406 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:41:46 crc kubenswrapper[4996]: I0228 09:41:46.628615 4996 generic.go:334] "Generic (PLEG): container finished" podID="f0ea0b93-3364-4191-b14e-6ad457132874" containerID="6c0526a9b64a36f05b41c1169df8ff41370b9233847daa25e65d48374b2aa500" exitCode=0 Feb 28 09:41:46 crc kubenswrapper[4996]: I0228 09:41:46.628678 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" event={"ID":"f0ea0b93-3364-4191-b14e-6ad457132874","Type":"ContainerDied","Data":"6c0526a9b64a36f05b41c1169df8ff41370b9233847daa25e65d48374b2aa500"} Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.134665 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.224348 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ceph\") pod \"f0ea0b93-3364-4191-b14e-6ad457132874\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.224619 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ssh-key-openstack-edpm-ipam\") pod \"f0ea0b93-3364-4191-b14e-6ad457132874\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.224698 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-inventory\") pod \"f0ea0b93-3364-4191-b14e-6ad457132874\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.224728 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8wjz\" (UniqueName: \"kubernetes.io/projected/f0ea0b93-3364-4191-b14e-6ad457132874-kube-api-access-w8wjz\") pod \"f0ea0b93-3364-4191-b14e-6ad457132874\" (UID: \"f0ea0b93-3364-4191-b14e-6ad457132874\") " Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.230839 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f0ea0b93-3364-4191-b14e-6ad457132874-kube-api-access-w8wjz" (OuterVolumeSpecName: "kube-api-access-w8wjz") pod "f0ea0b93-3364-4191-b14e-6ad457132874" (UID: "f0ea0b93-3364-4191-b14e-6ad457132874"). InnerVolumeSpecName "kube-api-access-w8wjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.237287 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ceph" (OuterVolumeSpecName: "ceph") pod "f0ea0b93-3364-4191-b14e-6ad457132874" (UID: "f0ea0b93-3364-4191-b14e-6ad457132874"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.257587 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f0ea0b93-3364-4191-b14e-6ad457132874" (UID: "f0ea0b93-3364-4191-b14e-6ad457132874"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.264507 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-inventory" (OuterVolumeSpecName: "inventory") pod "f0ea0b93-3364-4191-b14e-6ad457132874" (UID: "f0ea0b93-3364-4191-b14e-6ad457132874"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.327695 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.327742 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8wjz\" (UniqueName: \"kubernetes.io/projected/f0ea0b93-3364-4191-b14e-6ad457132874-kube-api-access-w8wjz\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.327762 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.327778 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0ea0b93-3364-4191-b14e-6ad457132874-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.667726 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" event={"ID":"f0ea0b93-3364-4191-b14e-6ad457132874","Type":"ContainerDied","Data":"99782e6a82dacb317c339bfbd7ea2ca77cf8ac2cb4c7359f1cf62f01ae4c2e6b"} Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.667790 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99782e6a82dacb317c339bfbd7ea2ca77cf8ac2cb4c7359f1cf62f01ae4c2e6b" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.667871 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.775433 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97"] Feb 28 09:41:48 crc kubenswrapper[4996]: E0228 09:41:48.775890 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ea0b93-3364-4191-b14e-6ad457132874" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.775906 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ea0b93-3364-4191-b14e-6ad457132874" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.776280 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0ea0b93-3364-4191-b14e-6ad457132874" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.776985 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.784838 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.785032 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.785355 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.785388 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.785357 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.785542 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.793354 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97"] Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.861254 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.861344 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/688e7207-5681-405d-9548-9c8d753b28e1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.861440 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4zxz\" (UniqueName: \"kubernetes.io/projected/688e7207-5681-405d-9548-9c8d753b28e1-kube-api-access-x4zxz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.861503 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.861602 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.861779 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.963120 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.963187 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.963273 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/688e7207-5681-405d-9548-9c8d753b28e1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.963298 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4zxz\" (UniqueName: \"kubernetes.io/projected/688e7207-5681-405d-9548-9c8d753b28e1-kube-api-access-x4zxz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.963338 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.963372 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.965397 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/688e7207-5681-405d-9548-9c8d753b28e1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.968346 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.969405 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.971087 
4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.973058 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:48 crc kubenswrapper[4996]: I0228 09:41:48.992683 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4zxz\" (UniqueName: \"kubernetes.io/projected/688e7207-5681-405d-9548-9c8d753b28e1-kube-api-access-x4zxz\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-f8v97\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:49 crc kubenswrapper[4996]: I0228 09:41:49.107241 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:41:49 crc kubenswrapper[4996]: I0228 09:41:49.697854 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97"] Feb 28 09:41:49 crc kubenswrapper[4996]: W0228 09:41:49.706386 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod688e7207_5681_405d_9548_9c8d753b28e1.slice/crio-4b88602477650e9210501d22d821c26061a4ef9ed37ac452daff52d89e214e7a WatchSource:0}: Error finding container 4b88602477650e9210501d22d821c26061a4ef9ed37ac452daff52d89e214e7a: Status 404 returned error can't find the container with id 4b88602477650e9210501d22d821c26061a4ef9ed37ac452daff52d89e214e7a Feb 28 09:41:50 crc kubenswrapper[4996]: I0228 09:41:50.692068 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" event={"ID":"688e7207-5681-405d-9548-9c8d753b28e1","Type":"ContainerStarted","Data":"e1ebb542ec83dd06cf273fc110997f576bfff3b440b3d191a5fda34e797d385e"} Feb 28 09:41:50 crc kubenswrapper[4996]: I0228 09:41:50.692447 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" event={"ID":"688e7207-5681-405d-9548-9c8d753b28e1","Type":"ContainerStarted","Data":"4b88602477650e9210501d22d821c26061a4ef9ed37ac452daff52d89e214e7a"} Feb 28 09:41:50 crc kubenswrapper[4996]: I0228 09:41:50.719867 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" podStartSLOduration=2.2923615379999998 podStartE2EDuration="2.719849677s" podCreationTimestamp="2026-02-28 09:41:48 +0000 UTC" firstStartedPulling="2026-02-28 09:41:49.709663625 +0000 UTC m=+2473.400466446" lastFinishedPulling="2026-02-28 09:41:50.137151774 +0000 UTC m=+2473.827954585" observedRunningTime="2026-02-28 
09:41:50.718118035 +0000 UTC m=+2474.408920856" watchObservedRunningTime="2026-02-28 09:41:50.719849677 +0000 UTC m=+2474.410652488" Feb 28 09:41:57 crc kubenswrapper[4996]: I0228 09:41:57.040827 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:41:57 crc kubenswrapper[4996]: E0228 09:41:57.041739 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.140928 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537862-4jzq4"] Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.142817 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537862-4jzq4" Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.145560 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.146037 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.146870 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.150326 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537862-4jzq4"] Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.292101 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shsnh\" (UniqueName: \"kubernetes.io/projected/d427abd7-9d44-47d7-98dd-b77e6daec678-kube-api-access-shsnh\") pod \"auto-csr-approver-29537862-4jzq4\" (UID: \"d427abd7-9d44-47d7-98dd-b77e6daec678\") " pod="openshift-infra/auto-csr-approver-29537862-4jzq4" Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.393674 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shsnh\" (UniqueName: \"kubernetes.io/projected/d427abd7-9d44-47d7-98dd-b77e6daec678-kube-api-access-shsnh\") pod \"auto-csr-approver-29537862-4jzq4\" (UID: \"d427abd7-9d44-47d7-98dd-b77e6daec678\") " pod="openshift-infra/auto-csr-approver-29537862-4jzq4" Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.416322 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shsnh\" (UniqueName: \"kubernetes.io/projected/d427abd7-9d44-47d7-98dd-b77e6daec678-kube-api-access-shsnh\") pod \"auto-csr-approver-29537862-4jzq4\" (UID: \"d427abd7-9d44-47d7-98dd-b77e6daec678\") " 
pod="openshift-infra/auto-csr-approver-29537862-4jzq4" Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.463902 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537862-4jzq4" Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.732103 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537862-4jzq4"] Feb 28 09:42:00 crc kubenswrapper[4996]: I0228 09:42:00.800963 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537862-4jzq4" event={"ID":"d427abd7-9d44-47d7-98dd-b77e6daec678","Type":"ContainerStarted","Data":"a0c555f7987ba0ec80dbd9a8a195d144f630e120cc3f36fe8141a54c5e68620f"} Feb 28 09:42:02 crc kubenswrapper[4996]: I0228 09:42:02.819359 4996 generic.go:334] "Generic (PLEG): container finished" podID="d427abd7-9d44-47d7-98dd-b77e6daec678" containerID="7ec067d8beb7f46c57ad8e14b95cd1df6c1ab87674d6105ce94017b59fb99d74" exitCode=0 Feb 28 09:42:02 crc kubenswrapper[4996]: I0228 09:42:02.819441 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537862-4jzq4" event={"ID":"d427abd7-9d44-47d7-98dd-b77e6daec678","Type":"ContainerDied","Data":"7ec067d8beb7f46c57ad8e14b95cd1df6c1ab87674d6105ce94017b59fb99d74"} Feb 28 09:42:04 crc kubenswrapper[4996]: I0228 09:42:04.267504 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537862-4jzq4" Feb 28 09:42:04 crc kubenswrapper[4996]: I0228 09:42:04.418138 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shsnh\" (UniqueName: \"kubernetes.io/projected/d427abd7-9d44-47d7-98dd-b77e6daec678-kube-api-access-shsnh\") pod \"d427abd7-9d44-47d7-98dd-b77e6daec678\" (UID: \"d427abd7-9d44-47d7-98dd-b77e6daec678\") " Feb 28 09:42:04 crc kubenswrapper[4996]: I0228 09:42:04.426208 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d427abd7-9d44-47d7-98dd-b77e6daec678-kube-api-access-shsnh" (OuterVolumeSpecName: "kube-api-access-shsnh") pod "d427abd7-9d44-47d7-98dd-b77e6daec678" (UID: "d427abd7-9d44-47d7-98dd-b77e6daec678"). InnerVolumeSpecName "kube-api-access-shsnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:42:04 crc kubenswrapper[4996]: I0228 09:42:04.520332 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shsnh\" (UniqueName: \"kubernetes.io/projected/d427abd7-9d44-47d7-98dd-b77e6daec678-kube-api-access-shsnh\") on node \"crc\" DevicePath \"\"" Feb 28 09:42:04 crc kubenswrapper[4996]: I0228 09:42:04.840676 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537862-4jzq4" event={"ID":"d427abd7-9d44-47d7-98dd-b77e6daec678","Type":"ContainerDied","Data":"a0c555f7987ba0ec80dbd9a8a195d144f630e120cc3f36fe8141a54c5e68620f"} Feb 28 09:42:04 crc kubenswrapper[4996]: I0228 09:42:04.841200 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0c555f7987ba0ec80dbd9a8a195d144f630e120cc3f36fe8141a54c5e68620f" Feb 28 09:42:04 crc kubenswrapper[4996]: I0228 09:42:04.840693 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537862-4jzq4" Feb 28 09:42:05 crc kubenswrapper[4996]: I0228 09:42:05.343663 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537856-qt2w5"] Feb 28 09:42:05 crc kubenswrapper[4996]: I0228 09:42:05.351393 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537856-qt2w5"] Feb 28 09:42:07 crc kubenswrapper[4996]: I0228 09:42:07.055996 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5952ed-8b64-4873-a4e3-688e1a36be1f" path="/var/lib/kubelet/pods/4b5952ed-8b64-4873-a4e3-688e1a36be1f/volumes" Feb 28 09:42:08 crc kubenswrapper[4996]: I0228 09:42:08.034443 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:42:08 crc kubenswrapper[4996]: E0228 09:42:08.034991 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:42:23 crc kubenswrapper[4996]: I0228 09:42:23.032714 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:42:23 crc kubenswrapper[4996]: E0228 09:42:23.033472 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:42:38 crc kubenswrapper[4996]: I0228 09:42:38.033620 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:42:38 crc kubenswrapper[4996]: E0228 09:42:38.034959 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:42:45 crc kubenswrapper[4996]: I0228 09:42:45.035241 4996 scope.go:117] "RemoveContainer" containerID="0c643a82282948ae29b5455d135f16150feebdc6383f0c418d8718c6a62782c1" Feb 28 09:42:50 crc kubenswrapper[4996]: I0228 09:42:50.034268 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:42:50 crc kubenswrapper[4996]: E0228 09:42:50.036959 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:42:59 crc kubenswrapper[4996]: I0228 09:42:59.393858 4996 generic.go:334] "Generic (PLEG): container finished" podID="688e7207-5681-405d-9548-9c8d753b28e1" containerID="e1ebb542ec83dd06cf273fc110997f576bfff3b440b3d191a5fda34e797d385e" exitCode=0 Feb 28 09:42:59 crc kubenswrapper[4996]: I0228 09:42:59.393965 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" event={"ID":"688e7207-5681-405d-9548-9c8d753b28e1","Type":"ContainerDied","Data":"e1ebb542ec83dd06cf273fc110997f576bfff3b440b3d191a5fda34e797d385e"} Feb 28 09:43:00 crc kubenswrapper[4996]: I0228 09:43:00.940705 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.011101 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ceph\") pod \"688e7207-5681-405d-9548-9c8d753b28e1\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.011169 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ovn-combined-ca-bundle\") pod \"688e7207-5681-405d-9548-9c8d753b28e1\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.011209 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-inventory\") pod \"688e7207-5681-405d-9548-9c8d753b28e1\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.011269 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zxz\" (UniqueName: \"kubernetes.io/projected/688e7207-5681-405d-9548-9c8d753b28e1-kube-api-access-x4zxz\") pod \"688e7207-5681-405d-9548-9c8d753b28e1\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.011333 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" 
(UniqueName: \"kubernetes.io/configmap/688e7207-5681-405d-9548-9c8d753b28e1-ovncontroller-config-0\") pod \"688e7207-5681-405d-9548-9c8d753b28e1\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.011356 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ssh-key-openstack-edpm-ipam\") pod \"688e7207-5681-405d-9548-9c8d753b28e1\" (UID: \"688e7207-5681-405d-9548-9c8d753b28e1\") " Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.017239 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "688e7207-5681-405d-9548-9c8d753b28e1" (UID: "688e7207-5681-405d-9548-9c8d753b28e1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.017914 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/688e7207-5681-405d-9548-9c8d753b28e1-kube-api-access-x4zxz" (OuterVolumeSpecName: "kube-api-access-x4zxz") pod "688e7207-5681-405d-9548-9c8d753b28e1" (UID: "688e7207-5681-405d-9548-9c8d753b28e1"). InnerVolumeSpecName "kube-api-access-x4zxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.030295 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ceph" (OuterVolumeSpecName: "ceph") pod "688e7207-5681-405d-9548-9c8d753b28e1" (UID: "688e7207-5681-405d-9548-9c8d753b28e1"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.041225 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/688e7207-5681-405d-9548-9c8d753b28e1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "688e7207-5681-405d-9548-9c8d753b28e1" (UID: "688e7207-5681-405d-9548-9c8d753b28e1"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.052763 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "688e7207-5681-405d-9548-9c8d753b28e1" (UID: "688e7207-5681-405d-9548-9c8d753b28e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.054277 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-inventory" (OuterVolumeSpecName: "inventory") pod "688e7207-5681-405d-9548-9c8d753b28e1" (UID: "688e7207-5681-405d-9548-9c8d753b28e1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.113219 4996 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/688e7207-5681-405d-9548-9c8d753b28e1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.113421 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.113520 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.113586 4996 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.113653 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/688e7207-5681-405d-9548-9c8d753b28e1-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.113720 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zxz\" (UniqueName: \"kubernetes.io/projected/688e7207-5681-405d-9548-9c8d753b28e1-kube-api-access-x4zxz\") on node \"crc\" DevicePath \"\"" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.418646 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" 
event={"ID":"688e7207-5681-405d-9548-9c8d753b28e1","Type":"ContainerDied","Data":"4b88602477650e9210501d22d821c26061a4ef9ed37ac452daff52d89e214e7a"} Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.418719 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b88602477650e9210501d22d821c26061a4ef9ed37ac452daff52d89e214e7a" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.418719 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-f8v97" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.683783 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt"] Feb 28 09:43:01 crc kubenswrapper[4996]: E0228 09:43:01.684824 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="688e7207-5681-405d-9548-9c8d753b28e1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.684875 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="688e7207-5681-405d-9548-9c8d753b28e1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 09:43:01 crc kubenswrapper[4996]: E0228 09:43:01.685084 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d427abd7-9d44-47d7-98dd-b77e6daec678" containerName="oc" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.685112 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d427abd7-9d44-47d7-98dd-b77e6daec678" containerName="oc" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.685583 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="688e7207-5681-405d-9548-9c8d753b28e1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.685623 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d427abd7-9d44-47d7-98dd-b77e6daec678" containerName="oc" 
Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.686485 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.708128 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt"] Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.708316 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.708776 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.709277 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.709455 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.709640 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.710328 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.710520 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.727801 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dp2p\" (UniqueName: \"kubernetes.io/projected/b17e3d39-7e71-472f-9011-d825c77b005a-kube-api-access-8dp2p\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.727862 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.727913 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.727985 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.728205 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: 
\"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.728281 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.728315 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.829720 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.829804 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.829825 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.829865 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dp2p\" (UniqueName: \"kubernetes.io/projected/b17e3d39-7e71-472f-9011-d825c77b005a-kube-api-access-8dp2p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.829886 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.829908 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 
09:43:01.829939 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.834993 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.835402 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.835917 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.837537 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" 
(UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.837814 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.838414 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:01 crc kubenswrapper[4996]: I0228 09:43:01.847044 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dp2p\" (UniqueName: \"kubernetes.io/projected/b17e3d39-7e71-472f-9011-d825c77b005a-kube-api-access-8dp2p\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:02 crc kubenswrapper[4996]: I0228 09:43:02.054259 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:43:02 crc kubenswrapper[4996]: I0228 09:43:02.636780 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt"] Feb 28 09:43:02 crc kubenswrapper[4996]: W0228 09:43:02.640079 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb17e3d39_7e71_472f_9011_d825c77b005a.slice/crio-d30557d0f09bb5b3b39863d5adcaf13e7a527af75e410c690889a9a50aef9ea7 WatchSource:0}: Error finding container d30557d0f09bb5b3b39863d5adcaf13e7a527af75e410c690889a9a50aef9ea7: Status 404 returned error can't find the container with id d30557d0f09bb5b3b39863d5adcaf13e7a527af75e410c690889a9a50aef9ea7 Feb 28 09:43:03 crc kubenswrapper[4996]: I0228 09:43:03.448647 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" event={"ID":"b17e3d39-7e71-472f-9011-d825c77b005a","Type":"ContainerStarted","Data":"19c11257396e8f4128e1c3c2304006ce180013a0483bda357743bbbb8c950412"} Feb 28 09:43:03 crc kubenswrapper[4996]: I0228 09:43:03.448998 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" event={"ID":"b17e3d39-7e71-472f-9011-d825c77b005a","Type":"ContainerStarted","Data":"d30557d0f09bb5b3b39863d5adcaf13e7a527af75e410c690889a9a50aef9ea7"} Feb 28 09:43:05 crc kubenswrapper[4996]: I0228 09:43:05.033595 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:43:05 crc kubenswrapper[4996]: E0228 09:43:05.034767 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:43:20 crc kubenswrapper[4996]: I0228 09:43:20.033864 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:43:20 crc kubenswrapper[4996]: E0228 09:43:20.035179 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:43:31 crc kubenswrapper[4996]: I0228 09:43:31.038489 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:43:31 crc kubenswrapper[4996]: E0228 09:43:31.039888 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:43:44 crc kubenswrapper[4996]: I0228 09:43:44.035718 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:43:44 crc kubenswrapper[4996]: I0228 09:43:44.826087 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"e348f5b02c06338b03f35074df638af610b737fdc8d3323ed8608d12b2b3077c"} Feb 28 09:43:44 crc kubenswrapper[4996]: I0228 09:43:44.854476 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" podStartSLOduration=43.397874424 podStartE2EDuration="43.854453927s" podCreationTimestamp="2026-02-28 09:43:01 +0000 UTC" firstStartedPulling="2026-02-28 09:43:02.642822516 +0000 UTC m=+2546.333625347" lastFinishedPulling="2026-02-28 09:43:03.099402029 +0000 UTC m=+2546.790204850" observedRunningTime="2026-02-28 09:43:03.467947137 +0000 UTC m=+2547.158749988" watchObservedRunningTime="2026-02-28 09:43:44.854453927 +0000 UTC m=+2588.545256748" Feb 28 09:43:59 crc kubenswrapper[4996]: I0228 09:43:59.974836 4996 generic.go:334] "Generic (PLEG): container finished" podID="b17e3d39-7e71-472f-9011-d825c77b005a" containerID="19c11257396e8f4128e1c3c2304006ce180013a0483bda357743bbbb8c950412" exitCode=0 Feb 28 09:43:59 crc kubenswrapper[4996]: I0228 09:43:59.974984 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" event={"ID":"b17e3d39-7e71-472f-9011-d825c77b005a","Type":"ContainerDied","Data":"19c11257396e8f4128e1c3c2304006ce180013a0483bda357743bbbb8c950412"} Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.152386 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537864-zkmxj"] Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.154125 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537864-zkmxj" Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.157529 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.158285 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.163746 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537864-zkmxj"] Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.164943 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.265511 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsfk7\" (UniqueName: \"kubernetes.io/projected/06c958c3-1720-4797-8e3f-edea267640f5-kube-api-access-fsfk7\") pod \"auto-csr-approver-29537864-zkmxj\" (UID: \"06c958c3-1720-4797-8e3f-edea267640f5\") " pod="openshift-infra/auto-csr-approver-29537864-zkmxj" Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.367499 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsfk7\" (UniqueName: \"kubernetes.io/projected/06c958c3-1720-4797-8e3f-edea267640f5-kube-api-access-fsfk7\") pod \"auto-csr-approver-29537864-zkmxj\" (UID: \"06c958c3-1720-4797-8e3f-edea267640f5\") " pod="openshift-infra/auto-csr-approver-29537864-zkmxj" Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.400305 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsfk7\" (UniqueName: \"kubernetes.io/projected/06c958c3-1720-4797-8e3f-edea267640f5-kube-api-access-fsfk7\") pod \"auto-csr-approver-29537864-zkmxj\" (UID: \"06c958c3-1720-4797-8e3f-edea267640f5\") " 
pod="openshift-infra/auto-csr-approver-29537864-zkmxj" Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.484873 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537864-zkmxj" Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.959729 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537864-zkmxj"] Feb 28 09:44:00 crc kubenswrapper[4996]: I0228 09:44:00.987691 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537864-zkmxj" event={"ID":"06c958c3-1720-4797-8e3f-edea267640f5","Type":"ContainerStarted","Data":"3bd86ea32c6f0d6f5cfb85a3d09a19516fa399d9cc55375b04f24fbc96efc423"} Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.447363 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.489295 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dp2p\" (UniqueName: \"kubernetes.io/projected/b17e3d39-7e71-472f-9011-d825c77b005a-kube-api-access-8dp2p\") pod \"b17e3d39-7e71-472f-9011-d825c77b005a\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.489395 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-inventory\") pod \"b17e3d39-7e71-472f-9011-d825c77b005a\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.489448 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"b17e3d39-7e71-472f-9011-d825c77b005a\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.489508 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-metadata-combined-ca-bundle\") pod \"b17e3d39-7e71-472f-9011-d825c77b005a\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.489625 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ceph\") pod \"b17e3d39-7e71-472f-9011-d825c77b005a\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.489688 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-nova-metadata-neutron-config-0\") pod \"b17e3d39-7e71-472f-9011-d825c77b005a\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.489732 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ssh-key-openstack-edpm-ipam\") pod \"b17e3d39-7e71-472f-9011-d825c77b005a\" (UID: \"b17e3d39-7e71-472f-9011-d825c77b005a\") " Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.496607 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ceph" (OuterVolumeSpecName: "ceph") pod "b17e3d39-7e71-472f-9011-d825c77b005a" (UID: "b17e3d39-7e71-472f-9011-d825c77b005a"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.496408 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "b17e3d39-7e71-472f-9011-d825c77b005a" (UID: "b17e3d39-7e71-472f-9011-d825c77b005a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.500306 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17e3d39-7e71-472f-9011-d825c77b005a-kube-api-access-8dp2p" (OuterVolumeSpecName: "kube-api-access-8dp2p") pod "b17e3d39-7e71-472f-9011-d825c77b005a" (UID: "b17e3d39-7e71-472f-9011-d825c77b005a"). InnerVolumeSpecName "kube-api-access-8dp2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.518644 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-inventory" (OuterVolumeSpecName: "inventory") pod "b17e3d39-7e71-472f-9011-d825c77b005a" (UID: "b17e3d39-7e71-472f-9011-d825c77b005a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.527133 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "b17e3d39-7e71-472f-9011-d825c77b005a" (UID: "b17e3d39-7e71-472f-9011-d825c77b005a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.540325 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "b17e3d39-7e71-472f-9011-d825c77b005a" (UID: "b17e3d39-7e71-472f-9011-d825c77b005a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.546863 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b17e3d39-7e71-472f-9011-d825c77b005a" (UID: "b17e3d39-7e71-472f-9011-d825c77b005a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.591999 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.592167 4996 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.592236 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.592297 4996 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-8dp2p\" (UniqueName: \"kubernetes.io/projected/b17e3d39-7e71-472f-9011-d825c77b005a-kube-api-access-8dp2p\") on node \"crc\" DevicePath \"\"" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.592348 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.592416 4996 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.592500 4996 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b17e3d39-7e71-472f-9011-d825c77b005a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.998449 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" event={"ID":"b17e3d39-7e71-472f-9011-d825c77b005a","Type":"ContainerDied","Data":"d30557d0f09bb5b3b39863d5adcaf13e7a527af75e410c690889a9a50aef9ea7"} Feb 28 09:44:01 crc kubenswrapper[4996]: I0228 09:44:01.998748 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d30557d0f09bb5b3b39863d5adcaf13e7a527af75e410c690889a9a50aef9ea7" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:01.998543 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.081763 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l"] Feb 28 09:44:02 crc kubenswrapper[4996]: E0228 09:44:02.082427 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17e3d39-7e71-472f-9011-d825c77b005a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.082447 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17e3d39-7e71-472f-9011-d825c77b005a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.082618 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17e3d39-7e71-472f-9011-d825c77b005a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.083209 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.085138 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.086585 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.086715 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.087935 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.088276 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.088439 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.094048 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l"] Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.201853 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.201913 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.201996 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgk4w\" (UniqueName: \"kubernetes.io/projected/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-kube-api-access-pgk4w\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.202314 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.202524 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.202775 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.304755 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.304872 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.304960 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.305097 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.305140 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.305214 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgk4w\" (UniqueName: \"kubernetes.io/projected/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-kube-api-access-pgk4w\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.311549 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.311586 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.312252 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.313806 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.314807 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.331669 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgk4w\" (UniqueName: \"kubernetes.io/projected/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-kube-api-access-pgk4w\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.411957 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:44:02 crc kubenswrapper[4996]: W0228 09:44:02.978506 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0393bfd_0a6b_48e8_8ccb_45ec21b73b58.slice/crio-f4384c1485037cf82112ad02a2c367f054cf0721a8a6a01f114784167edd5c5d WatchSource:0}: Error finding container f4384c1485037cf82112ad02a2c367f054cf0721a8a6a01f114784167edd5c5d: Status 404 returned error can't find the container with id f4384c1485037cf82112ad02a2c367f054cf0721a8a6a01f114784167edd5c5d Feb 28 09:44:02 crc kubenswrapper[4996]: I0228 09:44:02.982223 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l"] Feb 28 09:44:03 crc kubenswrapper[4996]: I0228 09:44:03.011482 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" event={"ID":"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58","Type":"ContainerStarted","Data":"f4384c1485037cf82112ad02a2c367f054cf0721a8a6a01f114784167edd5c5d"} Feb 28 09:44:03 crc kubenswrapper[4996]: I0228 09:44:03.014168 4996 generic.go:334] "Generic (PLEG): container finished" podID="06c958c3-1720-4797-8e3f-edea267640f5" containerID="7e8e92a238a9bcda3977578ab2683f3cef35af29e678d515f8f6d299cdd66cb6" exitCode=0 Feb 28 09:44:03 crc kubenswrapper[4996]: I0228 09:44:03.014219 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537864-zkmxj" event={"ID":"06c958c3-1720-4797-8e3f-edea267640f5","Type":"ContainerDied","Data":"7e8e92a238a9bcda3977578ab2683f3cef35af29e678d515f8f6d299cdd66cb6"} Feb 28 09:44:04 crc kubenswrapper[4996]: I0228 09:44:04.024306 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" 
event={"ID":"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58","Type":"ContainerStarted","Data":"113791c8d915f8baf16479ec2762e637968033a6c4a3f5ad48c93de1fd2d614f"} Feb 28 09:44:04 crc kubenswrapper[4996]: I0228 09:44:04.071608 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" podStartSLOduration=1.689260611 podStartE2EDuration="2.071587078s" podCreationTimestamp="2026-02-28 09:44:02 +0000 UTC" firstStartedPulling="2026-02-28 09:44:02.984950819 +0000 UTC m=+2606.675753640" lastFinishedPulling="2026-02-28 09:44:03.367277286 +0000 UTC m=+2607.058080107" observedRunningTime="2026-02-28 09:44:04.044068652 +0000 UTC m=+2607.734871473" watchObservedRunningTime="2026-02-28 09:44:04.071587078 +0000 UTC m=+2607.762389889" Feb 28 09:44:04 crc kubenswrapper[4996]: I0228 09:44:04.342075 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537864-zkmxj" Feb 28 09:44:04 crc kubenswrapper[4996]: I0228 09:44:04.453696 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsfk7\" (UniqueName: \"kubernetes.io/projected/06c958c3-1720-4797-8e3f-edea267640f5-kube-api-access-fsfk7\") pod \"06c958c3-1720-4797-8e3f-edea267640f5\" (UID: \"06c958c3-1720-4797-8e3f-edea267640f5\") " Feb 28 09:44:04 crc kubenswrapper[4996]: I0228 09:44:04.460789 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c958c3-1720-4797-8e3f-edea267640f5-kube-api-access-fsfk7" (OuterVolumeSpecName: "kube-api-access-fsfk7") pod "06c958c3-1720-4797-8e3f-edea267640f5" (UID: "06c958c3-1720-4797-8e3f-edea267640f5"). InnerVolumeSpecName "kube-api-access-fsfk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:44:04 crc kubenswrapper[4996]: I0228 09:44:04.559622 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsfk7\" (UniqueName: \"kubernetes.io/projected/06c958c3-1720-4797-8e3f-edea267640f5-kube-api-access-fsfk7\") on node \"crc\" DevicePath \"\"" Feb 28 09:44:05 crc kubenswrapper[4996]: I0228 09:44:05.054187 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537864-zkmxj" Feb 28 09:44:05 crc kubenswrapper[4996]: I0228 09:44:05.069749 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537864-zkmxj" event={"ID":"06c958c3-1720-4797-8e3f-edea267640f5","Type":"ContainerDied","Data":"3bd86ea32c6f0d6f5cfb85a3d09a19516fa399d9cc55375b04f24fbc96efc423"} Feb 28 09:44:05 crc kubenswrapper[4996]: I0228 09:44:05.069825 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bd86ea32c6f0d6f5cfb85a3d09a19516fa399d9cc55375b04f24fbc96efc423" Feb 28 09:44:05 crc kubenswrapper[4996]: I0228 09:44:05.421695 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537858-wf7m7"] Feb 28 09:44:05 crc kubenswrapper[4996]: I0228 09:44:05.431428 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537858-wf7m7"] Feb 28 09:44:07 crc kubenswrapper[4996]: I0228 09:44:07.052626 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd6fd48a-ad72-48f5-8bb1-06804de608c2" path="/var/lib/kubelet/pods/dd6fd48a-ad72-48f5-8bb1-06804de608c2/volumes" Feb 28 09:44:45 crc kubenswrapper[4996]: I0228 09:44:45.126422 4996 scope.go:117] "RemoveContainer" containerID="9c63488b0eba73748570b05be2ce120ee56ef56fe9ed55859d8bcd9fa4925677" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.160707 4996 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t"] Feb 28 09:45:00 crc kubenswrapper[4996]: E0228 09:45:00.161593 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c958c3-1720-4797-8e3f-edea267640f5" containerName="oc" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.161606 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c958c3-1720-4797-8e3f-edea267640f5" containerName="oc" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.161801 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c958c3-1720-4797-8e3f-edea267640f5" containerName="oc" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.162445 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.164676 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.164676 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.173839 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t"] Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.297830 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m4hv\" (UniqueName: \"kubernetes.io/projected/ffa9b75d-52fd-4522-b097-0c88036f0fa1-kube-api-access-7m4hv\") pod \"collect-profiles-29537865-hj59t\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.297908 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffa9b75d-52fd-4522-b097-0c88036f0fa1-config-volume\") pod \"collect-profiles-29537865-hj59t\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.298261 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffa9b75d-52fd-4522-b097-0c88036f0fa1-secret-volume\") pod \"collect-profiles-29537865-hj59t\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.400598 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m4hv\" (UniqueName: \"kubernetes.io/projected/ffa9b75d-52fd-4522-b097-0c88036f0fa1-kube-api-access-7m4hv\") pod \"collect-profiles-29537865-hj59t\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.400657 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffa9b75d-52fd-4522-b097-0c88036f0fa1-config-volume\") pod \"collect-profiles-29537865-hj59t\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.400776 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffa9b75d-52fd-4522-b097-0c88036f0fa1-secret-volume\") pod \"collect-profiles-29537865-hj59t\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.401945 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffa9b75d-52fd-4522-b097-0c88036f0fa1-config-volume\") pod \"collect-profiles-29537865-hj59t\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.408627 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffa9b75d-52fd-4522-b097-0c88036f0fa1-secret-volume\") pod \"collect-profiles-29537865-hj59t\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.425118 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m4hv\" (UniqueName: \"kubernetes.io/projected/ffa9b75d-52fd-4522-b097-0c88036f0fa1-kube-api-access-7m4hv\") pod \"collect-profiles-29537865-hj59t\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.484814 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:00 crc kubenswrapper[4996]: I0228 09:45:00.931957 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t"] Feb 28 09:45:01 crc kubenswrapper[4996]: I0228 09:45:01.648821 4996 generic.go:334] "Generic (PLEG): container finished" podID="ffa9b75d-52fd-4522-b097-0c88036f0fa1" containerID="4500134914afcc7fb50ece710f28eceddee5d04baa5c4d66318df863e4e8a76c" exitCode=0 Feb 28 09:45:01 crc kubenswrapper[4996]: I0228 09:45:01.648913 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" event={"ID":"ffa9b75d-52fd-4522-b097-0c88036f0fa1","Type":"ContainerDied","Data":"4500134914afcc7fb50ece710f28eceddee5d04baa5c4d66318df863e4e8a76c"} Feb 28 09:45:01 crc kubenswrapper[4996]: I0228 09:45:01.649190 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" event={"ID":"ffa9b75d-52fd-4522-b097-0c88036f0fa1","Type":"ContainerStarted","Data":"bbe23e3d209853a071388cd4651335e2e382995621f63f79381f054c5c8fa916"} Feb 28 09:45:02 crc kubenswrapper[4996]: I0228 09:45:02.967347 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.048986 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffa9b75d-52fd-4522-b097-0c88036f0fa1-config-volume\") pod \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.049112 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m4hv\" (UniqueName: \"kubernetes.io/projected/ffa9b75d-52fd-4522-b097-0c88036f0fa1-kube-api-access-7m4hv\") pod \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.049192 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffa9b75d-52fd-4522-b097-0c88036f0fa1-secret-volume\") pod \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\" (UID: \"ffa9b75d-52fd-4522-b097-0c88036f0fa1\") " Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.049738 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffa9b75d-52fd-4522-b097-0c88036f0fa1-config-volume" (OuterVolumeSpecName: "config-volume") pod "ffa9b75d-52fd-4522-b097-0c88036f0fa1" (UID: "ffa9b75d-52fd-4522-b097-0c88036f0fa1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.055366 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffa9b75d-52fd-4522-b097-0c88036f0fa1-kube-api-access-7m4hv" (OuterVolumeSpecName: "kube-api-access-7m4hv") pod "ffa9b75d-52fd-4522-b097-0c88036f0fa1" (UID: "ffa9b75d-52fd-4522-b097-0c88036f0fa1"). 
InnerVolumeSpecName "kube-api-access-7m4hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.055967 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffa9b75d-52fd-4522-b097-0c88036f0fa1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ffa9b75d-52fd-4522-b097-0c88036f0fa1" (UID: "ffa9b75d-52fd-4522-b097-0c88036f0fa1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.151674 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m4hv\" (UniqueName: \"kubernetes.io/projected/ffa9b75d-52fd-4522-b097-0c88036f0fa1-kube-api-access-7m4hv\") on node \"crc\" DevicePath \"\"" Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.151729 4996 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ffa9b75d-52fd-4522-b097-0c88036f0fa1-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.151750 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffa9b75d-52fd-4522-b097-0c88036f0fa1-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.675553 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" event={"ID":"ffa9b75d-52fd-4522-b097-0c88036f0fa1","Type":"ContainerDied","Data":"bbe23e3d209853a071388cd4651335e2e382995621f63f79381f054c5c8fa916"} Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.675811 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbe23e3d209853a071388cd4651335e2e382995621f63f79381f054c5c8fa916" Feb 28 09:45:03 crc kubenswrapper[4996]: I0228 09:45:03.675885 4996 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t" Feb 28 09:45:04 crc kubenswrapper[4996]: I0228 09:45:04.070056 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm"] Feb 28 09:45:04 crc kubenswrapper[4996]: I0228 09:45:04.079043 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537820-htwpm"] Feb 28 09:45:05 crc kubenswrapper[4996]: I0228 09:45:05.050154 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcafa6f4-ba55-465f-928c-71b3687abd21" path="/var/lib/kubelet/pods/dcafa6f4-ba55-465f-928c-71b3687abd21/volumes" Feb 28 09:45:45 crc kubenswrapper[4996]: I0228 09:45:45.208542 4996 scope.go:117] "RemoveContainer" containerID="1ab79257b4a147c3dea91ec3a14c5904d9e25d7a5cea7efcb49c99b4c632816c" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.164196 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537866-s6pj7"] Feb 28 09:46:00 crc kubenswrapper[4996]: E0228 09:46:00.165214 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffa9b75d-52fd-4522-b097-0c88036f0fa1" containerName="collect-profiles" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.165320 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffa9b75d-52fd-4522-b097-0c88036f0fa1" containerName="collect-profiles" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.165536 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffa9b75d-52fd-4522-b097-0c88036f0fa1" containerName="collect-profiles" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.166279 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537866-s6pj7" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.168882 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.169261 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.169625 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.184122 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537866-s6pj7"] Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.209326 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncv5d\" (UniqueName: \"kubernetes.io/projected/75d27b47-9525-4b0a-b96e-6f4d3d8222ba-kube-api-access-ncv5d\") pod \"auto-csr-approver-29537866-s6pj7\" (UID: \"75d27b47-9525-4b0a-b96e-6f4d3d8222ba\") " pod="openshift-infra/auto-csr-approver-29537866-s6pj7" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.311432 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncv5d\" (UniqueName: \"kubernetes.io/projected/75d27b47-9525-4b0a-b96e-6f4d3d8222ba-kube-api-access-ncv5d\") pod \"auto-csr-approver-29537866-s6pj7\" (UID: \"75d27b47-9525-4b0a-b96e-6f4d3d8222ba\") " pod="openshift-infra/auto-csr-approver-29537866-s6pj7" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.332128 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncv5d\" (UniqueName: \"kubernetes.io/projected/75d27b47-9525-4b0a-b96e-6f4d3d8222ba-kube-api-access-ncv5d\") pod \"auto-csr-approver-29537866-s6pj7\" (UID: \"75d27b47-9525-4b0a-b96e-6f4d3d8222ba\") " 
pod="openshift-infra/auto-csr-approver-29537866-s6pj7" Feb 28 09:46:00 crc kubenswrapper[4996]: I0228 09:46:00.496297 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537866-s6pj7" Feb 28 09:46:01 crc kubenswrapper[4996]: I0228 09:46:01.004498 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537866-s6pj7"] Feb 28 09:46:01 crc kubenswrapper[4996]: I0228 09:46:01.268749 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537866-s6pj7" event={"ID":"75d27b47-9525-4b0a-b96e-6f4d3d8222ba","Type":"ContainerStarted","Data":"9cf3f6d63f46cd0c8067f54dae46c99cfc2f17dc23f46c372392f43a770b4de8"} Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.284163 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537866-s6pj7" event={"ID":"75d27b47-9525-4b0a-b96e-6f4d3d8222ba","Type":"ContainerStarted","Data":"b7fc4b57190d9b6dfb9d19f93589596b1d466129cff47366a3253d43226b12a6"} Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.314114 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537866-s6pj7" podStartSLOduration=1.529519256 podStartE2EDuration="2.314085331s" podCreationTimestamp="2026-02-28 09:46:00 +0000 UTC" firstStartedPulling="2026-02-28 09:46:00.995078809 +0000 UTC m=+2724.685881660" lastFinishedPulling="2026-02-28 09:46:01.779644924 +0000 UTC m=+2725.470447735" observedRunningTime="2026-02-28 09:46:02.301298916 +0000 UTC m=+2725.992101757" watchObservedRunningTime="2026-02-28 09:46:02.314085331 +0000 UTC m=+2726.004888172" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.671574 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8pjgl"] Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.673972 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.699727 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pjgl"] Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.765348 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfmj\" (UniqueName: \"kubernetes.io/projected/795a6153-0ea4-4c4b-a897-9a5a54332c3a-kube-api-access-jdfmj\") pod \"certified-operators-8pjgl\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.765458 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-catalog-content\") pod \"certified-operators-8pjgl\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.765562 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-utilities\") pod \"certified-operators-8pjgl\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.867092 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdfmj\" (UniqueName: \"kubernetes.io/projected/795a6153-0ea4-4c4b-a897-9a5a54332c3a-kube-api-access-jdfmj\") pod \"certified-operators-8pjgl\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.867199 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-catalog-content\") pod \"certified-operators-8pjgl\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.867285 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-utilities\") pod \"certified-operators-8pjgl\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.867811 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-catalog-content\") pod \"certified-operators-8pjgl\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.868087 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-utilities\") pod \"certified-operators-8pjgl\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.902642 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfmj\" (UniqueName: \"kubernetes.io/projected/795a6153-0ea4-4c4b-a897-9a5a54332c3a-kube-api-access-jdfmj\") pod \"certified-operators-8pjgl\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:02 crc kubenswrapper[4996]: I0228 09:46:02.997704 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:03 crc kubenswrapper[4996]: I0228 09:46:03.316186 4996 generic.go:334] "Generic (PLEG): container finished" podID="75d27b47-9525-4b0a-b96e-6f4d3d8222ba" containerID="b7fc4b57190d9b6dfb9d19f93589596b1d466129cff47366a3253d43226b12a6" exitCode=0 Feb 28 09:46:03 crc kubenswrapper[4996]: I0228 09:46:03.316527 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537866-s6pj7" event={"ID":"75d27b47-9525-4b0a-b96e-6f4d3d8222ba","Type":"ContainerDied","Data":"b7fc4b57190d9b6dfb9d19f93589596b1d466129cff47366a3253d43226b12a6"} Feb 28 09:46:03 crc kubenswrapper[4996]: I0228 09:46:03.495799 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8pjgl"] Feb 28 09:46:03 crc kubenswrapper[4996]: W0228 09:46:03.502452 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod795a6153_0ea4_4c4b_a897_9a5a54332c3a.slice/crio-f6bdd952acf3c66d7c0c7e329467059dcae13977fa44ad7884d904baf8d7d205 WatchSource:0}: Error finding container f6bdd952acf3c66d7c0c7e329467059dcae13977fa44ad7884d904baf8d7d205: Status 404 returned error can't find the container with id f6bdd952acf3c66d7c0c7e329467059dcae13977fa44ad7884d904baf8d7d205 Feb 28 09:46:04 crc kubenswrapper[4996]: I0228 09:46:04.325857 4996 generic.go:334] "Generic (PLEG): container finished" podID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerID="0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98" exitCode=0 Feb 28 09:46:04 crc kubenswrapper[4996]: I0228 09:46:04.325964 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pjgl" event={"ID":"795a6153-0ea4-4c4b-a897-9a5a54332c3a","Type":"ContainerDied","Data":"0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98"} Feb 28 09:46:04 crc kubenswrapper[4996]: I0228 
09:46:04.327380 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pjgl" event={"ID":"795a6153-0ea4-4c4b-a897-9a5a54332c3a","Type":"ContainerStarted","Data":"f6bdd952acf3c66d7c0c7e329467059dcae13977fa44ad7884d904baf8d7d205"} Feb 28 09:46:04 crc kubenswrapper[4996]: I0228 09:46:04.734275 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537866-s6pj7" Feb 28 09:46:04 crc kubenswrapper[4996]: I0228 09:46:04.903214 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncv5d\" (UniqueName: \"kubernetes.io/projected/75d27b47-9525-4b0a-b96e-6f4d3d8222ba-kube-api-access-ncv5d\") pod \"75d27b47-9525-4b0a-b96e-6f4d3d8222ba\" (UID: \"75d27b47-9525-4b0a-b96e-6f4d3d8222ba\") " Feb 28 09:46:04 crc kubenswrapper[4996]: I0228 09:46:04.912273 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d27b47-9525-4b0a-b96e-6f4d3d8222ba-kube-api-access-ncv5d" (OuterVolumeSpecName: "kube-api-access-ncv5d") pod "75d27b47-9525-4b0a-b96e-6f4d3d8222ba" (UID: "75d27b47-9525-4b0a-b96e-6f4d3d8222ba"). InnerVolumeSpecName "kube-api-access-ncv5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:46:05 crc kubenswrapper[4996]: I0228 09:46:05.006582 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncv5d\" (UniqueName: \"kubernetes.io/projected/75d27b47-9525-4b0a-b96e-6f4d3d8222ba-kube-api-access-ncv5d\") on node \"crc\" DevicePath \"\"" Feb 28 09:46:05 crc kubenswrapper[4996]: I0228 09:46:05.343711 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537866-s6pj7" Feb 28 09:46:05 crc kubenswrapper[4996]: I0228 09:46:05.343708 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537866-s6pj7" event={"ID":"75d27b47-9525-4b0a-b96e-6f4d3d8222ba","Type":"ContainerDied","Data":"9cf3f6d63f46cd0c8067f54dae46c99cfc2f17dc23f46c372392f43a770b4de8"} Feb 28 09:46:05 crc kubenswrapper[4996]: I0228 09:46:05.343791 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cf3f6d63f46cd0c8067f54dae46c99cfc2f17dc23f46c372392f43a770b4de8" Feb 28 09:46:05 crc kubenswrapper[4996]: I0228 09:46:05.349521 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pjgl" event={"ID":"795a6153-0ea4-4c4b-a897-9a5a54332c3a","Type":"ContainerStarted","Data":"ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b"} Feb 28 09:46:05 crc kubenswrapper[4996]: I0228 09:46:05.391474 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537860-5rcwq"] Feb 28 09:46:05 crc kubenswrapper[4996]: I0228 09:46:05.407273 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537860-5rcwq"] Feb 28 09:46:06 crc kubenswrapper[4996]: I0228 09:46:06.360960 4996 generic.go:334] "Generic (PLEG): container finished" podID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerID="ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b" exitCode=0 Feb 28 09:46:06 crc kubenswrapper[4996]: I0228 09:46:06.361065 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pjgl" event={"ID":"795a6153-0ea4-4c4b-a897-9a5a54332c3a","Type":"ContainerDied","Data":"ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b"} Feb 28 09:46:07 crc kubenswrapper[4996]: I0228 09:46:07.043663 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c05758a8-a3e2-43c8-80b3-1cf42027c11e" path="/var/lib/kubelet/pods/c05758a8-a3e2-43c8-80b3-1cf42027c11e/volumes" Feb 28 09:46:07 crc kubenswrapper[4996]: I0228 09:46:07.378146 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pjgl" event={"ID":"795a6153-0ea4-4c4b-a897-9a5a54332c3a","Type":"ContainerStarted","Data":"713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110"} Feb 28 09:46:07 crc kubenswrapper[4996]: I0228 09:46:07.404770 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8pjgl" podStartSLOduration=2.966298142 podStartE2EDuration="5.404743797s" podCreationTimestamp="2026-02-28 09:46:02 +0000 UTC" firstStartedPulling="2026-02-28 09:46:04.328067854 +0000 UTC m=+2728.018870655" lastFinishedPulling="2026-02-28 09:46:06.766513489 +0000 UTC m=+2730.457316310" observedRunningTime="2026-02-28 09:46:07.400664466 +0000 UTC m=+2731.091467297" watchObservedRunningTime="2026-02-28 09:46:07.404743797 +0000 UTC m=+2731.095546618" Feb 28 09:46:12 crc kubenswrapper[4996]: I0228 09:46:12.248949 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:46:12 crc kubenswrapper[4996]: I0228 09:46:12.249469 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:46:12 crc kubenswrapper[4996]: I0228 09:46:12.998609 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:12 crc kubenswrapper[4996]: I0228 09:46:12.998665 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:13 crc kubenswrapper[4996]: I0228 09:46:13.047384 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:13 crc kubenswrapper[4996]: I0228 09:46:13.482059 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:13 crc kubenswrapper[4996]: I0228 09:46:13.530440 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pjgl"] Feb 28 09:46:15 crc kubenswrapper[4996]: I0228 09:46:15.456778 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8pjgl" podUID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerName="registry-server" containerID="cri-o://713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110" gracePeriod=2 Feb 28 09:46:15 crc kubenswrapper[4996]: I0228 09:46:15.957334 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.151176 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-utilities\") pod \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.151407 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdfmj\" (UniqueName: \"kubernetes.io/projected/795a6153-0ea4-4c4b-a897-9a5a54332c3a-kube-api-access-jdfmj\") pod \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.151482 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-catalog-content\") pod \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\" (UID: \"795a6153-0ea4-4c4b-a897-9a5a54332c3a\") " Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.152422 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-utilities" (OuterVolumeSpecName: "utilities") pod "795a6153-0ea4-4c4b-a897-9a5a54332c3a" (UID: "795a6153-0ea4-4c4b-a897-9a5a54332c3a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.156841 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795a6153-0ea4-4c4b-a897-9a5a54332c3a-kube-api-access-jdfmj" (OuterVolumeSpecName: "kube-api-access-jdfmj") pod "795a6153-0ea4-4c4b-a897-9a5a54332c3a" (UID: "795a6153-0ea4-4c4b-a897-9a5a54332c3a"). InnerVolumeSpecName "kube-api-access-jdfmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.223904 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "795a6153-0ea4-4c4b-a897-9a5a54332c3a" (UID: "795a6153-0ea4-4c4b-a897-9a5a54332c3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.253055 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.253095 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdfmj\" (UniqueName: \"kubernetes.io/projected/795a6153-0ea4-4c4b-a897-9a5a54332c3a-kube-api-access-jdfmj\") on node \"crc\" DevicePath \"\"" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.253111 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795a6153-0ea4-4c4b-a897-9a5a54332c3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.469263 4996 generic.go:334] "Generic (PLEG): container finished" podID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerID="713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110" exitCode=0 Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.469312 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8pjgl" event={"ID":"795a6153-0ea4-4c4b-a897-9a5a54332c3a","Type":"ContainerDied","Data":"713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110"} Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.469346 4996 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-8pjgl" event={"ID":"795a6153-0ea4-4c4b-a897-9a5a54332c3a","Type":"ContainerDied","Data":"f6bdd952acf3c66d7c0c7e329467059dcae13977fa44ad7884d904baf8d7d205"} Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.469370 4996 scope.go:117] "RemoveContainer" containerID="713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.469432 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8pjgl" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.492938 4996 scope.go:117] "RemoveContainer" containerID="ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.546368 4996 scope.go:117] "RemoveContainer" containerID="0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.550163 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8pjgl"] Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.588764 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8pjgl"] Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.631482 4996 scope.go:117] "RemoveContainer" containerID="713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110" Feb 28 09:46:16 crc kubenswrapper[4996]: E0228 09:46:16.636226 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110\": container with ID starting with 713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110 not found: ID does not exist" containerID="713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 
09:46:16.636273 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110"} err="failed to get container status \"713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110\": rpc error: code = NotFound desc = could not find container \"713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110\": container with ID starting with 713bda8a96b0c07893912db3b0c81c795d87781f1ca021814dfe111d2a3f2110 not found: ID does not exist" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.636297 4996 scope.go:117] "RemoveContainer" containerID="ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b" Feb 28 09:46:16 crc kubenswrapper[4996]: E0228 09:46:16.641293 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b\": container with ID starting with ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b not found: ID does not exist" containerID="ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.641340 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b"} err="failed to get container status \"ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b\": rpc error: code = NotFound desc = could not find container \"ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b\": container with ID starting with ebc4dd6f9b2b83f1c674b502151a1e937439f2251eba670add0ff71669a5a77b not found: ID does not exist" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.641364 4996 scope.go:117] "RemoveContainer" containerID="0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98" Feb 28 09:46:16 crc 
kubenswrapper[4996]: E0228 09:46:16.641786 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98\": container with ID starting with 0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98 not found: ID does not exist" containerID="0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98" Feb 28 09:46:16 crc kubenswrapper[4996]: I0228 09:46:16.641813 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98"} err="failed to get container status \"0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98\": rpc error: code = NotFound desc = could not find container \"0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98\": container with ID starting with 0067c9099a6bca91c096620d0ca2e46eb5f0f1856244b5a6a2a802516b5d6e98 not found: ID does not exist" Feb 28 09:46:17 crc kubenswrapper[4996]: I0228 09:46:17.043806 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" path="/var/lib/kubelet/pods/795a6153-0ea4-4c4b-a897-9a5a54332c3a/volumes" Feb 28 09:46:42 crc kubenswrapper[4996]: I0228 09:46:42.248637 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:46:42 crc kubenswrapper[4996]: I0228 09:46:42.249459 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 28 09:46:45 crc kubenswrapper[4996]: I0228 09:46:45.302811 4996 scope.go:117] "RemoveContainer" containerID="db1540672614519805cdb32450b43e178c08c6cc4ed200117c6eaa189763358e" Feb 28 09:47:12 crc kubenswrapper[4996]: I0228 09:47:12.249273 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:47:12 crc kubenswrapper[4996]: I0228 09:47:12.249846 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:47:12 crc kubenswrapper[4996]: I0228 09:47:12.249903 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:47:12 crc kubenswrapper[4996]: I0228 09:47:12.250638 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e348f5b02c06338b03f35074df638af610b737fdc8d3323ed8608d12b2b3077c"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:47:12 crc kubenswrapper[4996]: I0228 09:47:12.250712 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://e348f5b02c06338b03f35074df638af610b737fdc8d3323ed8608d12b2b3077c" 
gracePeriod=600 Feb 28 09:47:13 crc kubenswrapper[4996]: I0228 09:47:13.028378 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="e348f5b02c06338b03f35074df638af610b737fdc8d3323ed8608d12b2b3077c" exitCode=0 Feb 28 09:47:13 crc kubenswrapper[4996]: I0228 09:47:13.029341 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"e348f5b02c06338b03f35074df638af610b737fdc8d3323ed8608d12b2b3077c"} Feb 28 09:47:13 crc kubenswrapper[4996]: I0228 09:47:13.029482 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50"} Feb 28 09:47:13 crc kubenswrapper[4996]: I0228 09:47:13.029647 4996 scope.go:117] "RemoveContainer" containerID="1d48ccb9aca734ff233bb14f709b5cd62be1f9b6fecb4d6f87449e9ec209cbee" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.145048 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537868-rfx5n"] Feb 28 09:48:00 crc kubenswrapper[4996]: E0228 09:48:00.146605 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerName="registry-server" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.146639 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerName="registry-server" Feb 28 09:48:00 crc kubenswrapper[4996]: E0228 09:48:00.146675 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d27b47-9525-4b0a-b96e-6f4d3d8222ba" containerName="oc" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.146691 4996 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="75d27b47-9525-4b0a-b96e-6f4d3d8222ba" containerName="oc" Feb 28 09:48:00 crc kubenswrapper[4996]: E0228 09:48:00.146719 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerName="extract-utilities" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.146735 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerName="extract-utilities" Feb 28 09:48:00 crc kubenswrapper[4996]: E0228 09:48:00.146774 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerName="extract-content" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.146790 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerName="extract-content" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.147222 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d27b47-9525-4b0a-b96e-6f4d3d8222ba" containerName="oc" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.147268 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="795a6153-0ea4-4c4b-a897-9a5a54332c3a" containerName="registry-server" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.148409 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537868-rfx5n" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.150275 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.151825 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537868-rfx5n"] Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.152186 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.153277 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.253530 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68p9l\" (UniqueName: \"kubernetes.io/projected/33251059-ef77-4e71-9779-408745e6ac20-kube-api-access-68p9l\") pod \"auto-csr-approver-29537868-rfx5n\" (UID: \"33251059-ef77-4e71-9779-408745e6ac20\") " pod="openshift-infra/auto-csr-approver-29537868-rfx5n" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.354979 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68p9l\" (UniqueName: \"kubernetes.io/projected/33251059-ef77-4e71-9779-408745e6ac20-kube-api-access-68p9l\") pod \"auto-csr-approver-29537868-rfx5n\" (UID: \"33251059-ef77-4e71-9779-408745e6ac20\") " pod="openshift-infra/auto-csr-approver-29537868-rfx5n" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.382881 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68p9l\" (UniqueName: \"kubernetes.io/projected/33251059-ef77-4e71-9779-408745e6ac20-kube-api-access-68p9l\") pod \"auto-csr-approver-29537868-rfx5n\" (UID: \"33251059-ef77-4e71-9779-408745e6ac20\") " 
pod="openshift-infra/auto-csr-approver-29537868-rfx5n" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.470870 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537868-rfx5n" Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.916262 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537868-rfx5n"] Feb 28 09:48:00 crc kubenswrapper[4996]: I0228 09:48:00.936398 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:48:01 crc kubenswrapper[4996]: I0228 09:48:01.511118 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537868-rfx5n" event={"ID":"33251059-ef77-4e71-9779-408745e6ac20","Type":"ContainerStarted","Data":"8abd2094fe1b8ce1a229e3e90e7de6f8d3d2a06e33637695d250d7127da30b7b"} Feb 28 09:48:02 crc kubenswrapper[4996]: I0228 09:48:02.523751 4996 generic.go:334] "Generic (PLEG): container finished" podID="33251059-ef77-4e71-9779-408745e6ac20" containerID="64cb7c11c61ec81c4797f5682c3ac5d693e2e379010bf65696e52b081ff26061" exitCode=0 Feb 28 09:48:02 crc kubenswrapper[4996]: I0228 09:48:02.523831 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537868-rfx5n" event={"ID":"33251059-ef77-4e71-9779-408745e6ac20","Type":"ContainerDied","Data":"64cb7c11c61ec81c4797f5682c3ac5d693e2e379010bf65696e52b081ff26061"} Feb 28 09:48:03 crc kubenswrapper[4996]: I0228 09:48:03.889628 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537868-rfx5n" Feb 28 09:48:03 crc kubenswrapper[4996]: I0228 09:48:03.934780 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68p9l\" (UniqueName: \"kubernetes.io/projected/33251059-ef77-4e71-9779-408745e6ac20-kube-api-access-68p9l\") pod \"33251059-ef77-4e71-9779-408745e6ac20\" (UID: \"33251059-ef77-4e71-9779-408745e6ac20\") " Feb 28 09:48:03 crc kubenswrapper[4996]: I0228 09:48:03.943230 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33251059-ef77-4e71-9779-408745e6ac20-kube-api-access-68p9l" (OuterVolumeSpecName: "kube-api-access-68p9l") pod "33251059-ef77-4e71-9779-408745e6ac20" (UID: "33251059-ef77-4e71-9779-408745e6ac20"). InnerVolumeSpecName "kube-api-access-68p9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:48:04 crc kubenswrapper[4996]: I0228 09:48:04.036813 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68p9l\" (UniqueName: \"kubernetes.io/projected/33251059-ef77-4e71-9779-408745e6ac20-kube-api-access-68p9l\") on node \"crc\" DevicePath \"\"" Feb 28 09:48:04 crc kubenswrapper[4996]: I0228 09:48:04.542478 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537868-rfx5n" event={"ID":"33251059-ef77-4e71-9779-408745e6ac20","Type":"ContainerDied","Data":"8abd2094fe1b8ce1a229e3e90e7de6f8d3d2a06e33637695d250d7127da30b7b"} Feb 28 09:48:04 crc kubenswrapper[4996]: I0228 09:48:04.542533 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8abd2094fe1b8ce1a229e3e90e7de6f8d3d2a06e33637695d250d7127da30b7b" Feb 28 09:48:04 crc kubenswrapper[4996]: I0228 09:48:04.542601 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537868-rfx5n" Feb 28 09:48:04 crc kubenswrapper[4996]: I0228 09:48:04.986385 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537862-4jzq4"] Feb 28 09:48:05 crc kubenswrapper[4996]: I0228 09:48:05.003867 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537862-4jzq4"] Feb 28 09:48:05 crc kubenswrapper[4996]: I0228 09:48:05.051531 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d427abd7-9d44-47d7-98dd-b77e6daec678" path="/var/lib/kubelet/pods/d427abd7-9d44-47d7-98dd-b77e6daec678/volumes" Feb 28 09:48:07 crc kubenswrapper[4996]: I0228 09:48:07.579285 4996 generic.go:334] "Generic (PLEG): container finished" podID="f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" containerID="113791c8d915f8baf16479ec2762e637968033a6c4a3f5ad48c93de1fd2d614f" exitCode=0 Feb 28 09:48:07 crc kubenswrapper[4996]: I0228 09:48:07.579408 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" event={"ID":"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58","Type":"ContainerDied","Data":"113791c8d915f8baf16479ec2762e637968033a6c4a3f5ad48c93de1fd2d614f"} Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.017368 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.143112 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ssh-key-openstack-edpm-ipam\") pod \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.143248 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-combined-ca-bundle\") pod \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.143273 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ceph\") pod \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.143357 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-inventory\") pod \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.143393 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgk4w\" (UniqueName: \"kubernetes.io/projected/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-kube-api-access-pgk4w\") pod \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.143439 4996 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-secret-0\") pod \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\" (UID: \"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58\") " Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.151626 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ceph" (OuterVolumeSpecName: "ceph") pod "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" (UID: "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.151630 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-kube-api-access-pgk4w" (OuterVolumeSpecName: "kube-api-access-pgk4w") pod "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" (UID: "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58"). InnerVolumeSpecName "kube-api-access-pgk4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.154202 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" (UID: "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.170621 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" (UID: "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.176611 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" (UID: "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.177186 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-inventory" (OuterVolumeSpecName: "inventory") pod "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" (UID: "f0393bfd-0a6b-48e8-8ccb-45ec21b73b58"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.245363 4996 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.245402 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.245418 4996 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.245429 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.245441 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.245455 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgk4w\" (UniqueName: \"kubernetes.io/projected/f0393bfd-0a6b-48e8-8ccb-45ec21b73b58-kube-api-access-pgk4w\") on node \"crc\" DevicePath \"\"" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.599867 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" event={"ID":"f0393bfd-0a6b-48e8-8ccb-45ec21b73b58","Type":"ContainerDied","Data":"f4384c1485037cf82112ad02a2c367f054cf0721a8a6a01f114784167edd5c5d"} Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.600140 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4384c1485037cf82112ad02a2c367f054cf0721a8a6a01f114784167edd5c5d" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.600063 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.699369 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm"] Feb 28 09:48:09 crc kubenswrapper[4996]: E0228 09:48:09.699802 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.699824 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 09:48:09 crc kubenswrapper[4996]: E0228 09:48:09.699843 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33251059-ef77-4e71-9779-408745e6ac20" containerName="oc" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.699851 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="33251059-ef77-4e71-9779-408745e6ac20" containerName="oc" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.700089 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0393bfd-0a6b-48e8-8ccb-45ec21b73b58" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.700113 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="33251059-ef77-4e71-9779-408745e6ac20" containerName="oc" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.700850 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.705547 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.705725 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.705839 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.706022 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.706192 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-b4rdm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.706455 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.706586 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.706592 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.712398 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm"] Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.713857 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753168 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753257 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753285 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64bdz\" (UniqueName: \"kubernetes.io/projected/1d56c0f7-03f9-4035-b2d2-ef6d77821940-kube-api-access-64bdz\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753304 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753329 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753353 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753404 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753436 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753456 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753484 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753529 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753579 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.753602 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.855893 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.855964 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64bdz\" (UniqueName: \"kubernetes.io/projected/1d56c0f7-03f9-4035-b2d2-ef6d77821940-kube-api-access-64bdz\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856049 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856101 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: 
\"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856146 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856198 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856266 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856307 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc 
kubenswrapper[4996]: I0228 09:48:09.856357 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856403 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856506 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856558 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.856597 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.857093 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.857994 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.860384 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.860827 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: 
\"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.860897 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.862098 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.862709 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.863348 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 
09:48:09.863371 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.864390 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.870472 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.871258 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:09 crc kubenswrapper[4996]: I0228 09:48:09.873606 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64bdz\" (UniqueName: 
\"kubernetes.io/projected/1d56c0f7-03f9-4035-b2d2-ef6d77821940-kube-api-access-64bdz\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:10 crc kubenswrapper[4996]: I0228 09:48:10.018826 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:48:10 crc kubenswrapper[4996]: I0228 09:48:10.554874 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm"] Feb 28 09:48:10 crc kubenswrapper[4996]: I0228 09:48:10.612319 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" event={"ID":"1d56c0f7-03f9-4035-b2d2-ef6d77821940","Type":"ContainerStarted","Data":"a1e820e6bb0931084c2c496d3fde1b40b3a190c6a050e23ea28a1b1b38307bd0"} Feb 28 09:48:11 crc kubenswrapper[4996]: I0228 09:48:11.620843 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" event={"ID":"1d56c0f7-03f9-4035-b2d2-ef6d77821940","Type":"ContainerStarted","Data":"fbf05b8bf0deca478ce80f9fe16f21bea65e33002c26776a548874b97ed3bf6c"} Feb 28 09:48:11 crc kubenswrapper[4996]: I0228 09:48:11.640297 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" podStartSLOduration=2.206622834 podStartE2EDuration="2.640275868s" podCreationTimestamp="2026-02-28 09:48:09 +0000 UTC" firstStartedPulling="2026-02-28 09:48:10.558419649 +0000 UTC m=+2854.249222460" lastFinishedPulling="2026-02-28 09:48:10.992072673 +0000 UTC m=+2854.682875494" observedRunningTime="2026-02-28 09:48:11.63789934 +0000 UTC m=+2855.328702151" watchObservedRunningTime="2026-02-28 
09:48:11.640275868 +0000 UTC m=+2855.331078689" Feb 28 09:48:45 crc kubenswrapper[4996]: I0228 09:48:45.416717 4996 scope.go:117] "RemoveContainer" containerID="7ec067d8beb7f46c57ad8e14b95cd1df6c1ab87674d6105ce94017b59fb99d74" Feb 28 09:49:12 crc kubenswrapper[4996]: I0228 09:49:12.248713 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:49:12 crc kubenswrapper[4996]: I0228 09:49:12.249284 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:49:15 crc kubenswrapper[4996]: I0228 09:49:15.937118 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x2g5z"] Feb 28 09:49:15 crc kubenswrapper[4996]: I0228 09:49:15.940962 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:15 crc kubenswrapper[4996]: I0228 09:49:15.949867 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-catalog-content\") pod \"redhat-marketplace-x2g5z\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:15 crc kubenswrapper[4996]: I0228 09:49:15.950064 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-utilities\") pod \"redhat-marketplace-x2g5z\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:15 crc kubenswrapper[4996]: I0228 09:49:15.950110 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md677\" (UniqueName: \"kubernetes.io/projected/7348f568-f0cd-463d-a688-5b6042466073-kube-api-access-md677\") pod \"redhat-marketplace-x2g5z\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:15 crc kubenswrapper[4996]: I0228 09:49:15.955465 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2g5z"] Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.052185 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-catalog-content\") pod \"redhat-marketplace-x2g5z\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.052660 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-catalog-content\") pod \"redhat-marketplace-x2g5z\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.053102 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-utilities\") pod \"redhat-marketplace-x2g5z\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.053184 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-utilities\") pod \"redhat-marketplace-x2g5z\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.053216 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md677\" (UniqueName: \"kubernetes.io/projected/7348f568-f0cd-463d-a688-5b6042466073-kube-api-access-md677\") pod \"redhat-marketplace-x2g5z\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.077108 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md677\" (UniqueName: \"kubernetes.io/projected/7348f568-f0cd-463d-a688-5b6042466073-kube-api-access-md677\") pod \"redhat-marketplace-x2g5z\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.290139 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.759304 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2g5z"] Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.917885 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k6jfm"] Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.921257 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.932340 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k6jfm"] Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.968001 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfnmg\" (UniqueName: \"kubernetes.io/projected/bab35c20-1bc4-48d9-9560-d3515b94b955-kube-api-access-rfnmg\") pod \"redhat-operators-k6jfm\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.969046 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-utilities\") pod \"redhat-operators-k6jfm\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:16 crc kubenswrapper[4996]: I0228 09:49:16.969168 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-catalog-content\") pod \"redhat-operators-k6jfm\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " 
pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.070727 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfnmg\" (UniqueName: \"kubernetes.io/projected/bab35c20-1bc4-48d9-9560-d3515b94b955-kube-api-access-rfnmg\") pod \"redhat-operators-k6jfm\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.072138 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-utilities\") pod \"redhat-operators-k6jfm\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.072260 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-catalog-content\") pod \"redhat-operators-k6jfm\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.072817 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-catalog-content\") pod \"redhat-operators-k6jfm\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.081115 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-utilities\") pod \"redhat-operators-k6jfm\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:17 crc 
kubenswrapper[4996]: I0228 09:49:17.100961 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfnmg\" (UniqueName: \"kubernetes.io/projected/bab35c20-1bc4-48d9-9560-d3515b94b955-kube-api-access-rfnmg\") pod \"redhat-operators-k6jfm\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.252756 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.307208 4996 generic.go:334] "Generic (PLEG): container finished" podID="7348f568-f0cd-463d-a688-5b6042466073" containerID="ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b" exitCode=0 Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.307252 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2g5z" event={"ID":"7348f568-f0cd-463d-a688-5b6042466073","Type":"ContainerDied","Data":"ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b"} Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.307306 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2g5z" event={"ID":"7348f568-f0cd-463d-a688-5b6042466073","Type":"ContainerStarted","Data":"b1c07db1333f1849a851d3f1415729984aaab886680e0363833cbfca36d0a2ff"} Feb 28 09:49:17 crc kubenswrapper[4996]: I0228 09:49:17.681835 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k6jfm"] Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.319658 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2hnvt"] Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.322418 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.326302 4996 generic.go:334] "Generic (PLEG): container finished" podID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerID="4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa" exitCode=0 Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.326456 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6jfm" event={"ID":"bab35c20-1bc4-48d9-9560-d3515b94b955","Type":"ContainerDied","Data":"4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa"} Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.326556 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6jfm" event={"ID":"bab35c20-1bc4-48d9-9560-d3515b94b955","Type":"ContainerStarted","Data":"1e81e68ebeef83f1a14e204a34461fc4cb0f3e7c3dda7e9adb09bbb1a093c8a6"} Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.329352 4996 generic.go:334] "Generic (PLEG): container finished" podID="7348f568-f0cd-463d-a688-5b6042466073" containerID="86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a" exitCode=0 Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.329398 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2g5z" event={"ID":"7348f568-f0cd-463d-a688-5b6042466073","Type":"ContainerDied","Data":"86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a"} Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.348561 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hnvt"] Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.395287 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-utilities\") pod 
\"community-operators-2hnvt\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.395416 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-catalog-content\") pod \"community-operators-2hnvt\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.395453 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj5dw\" (UniqueName: \"kubernetes.io/projected/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-kube-api-access-nj5dw\") pod \"community-operators-2hnvt\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.497667 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-utilities\") pod \"community-operators-2hnvt\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.497718 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-catalog-content\") pod \"community-operators-2hnvt\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.497738 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj5dw\" (UniqueName: 
\"kubernetes.io/projected/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-kube-api-access-nj5dw\") pod \"community-operators-2hnvt\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.498229 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-utilities\") pod \"community-operators-2hnvt\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.498349 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-catalog-content\") pod \"community-operators-2hnvt\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.517724 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj5dw\" (UniqueName: \"kubernetes.io/projected/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-kube-api-access-nj5dw\") pod \"community-operators-2hnvt\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:18 crc kubenswrapper[4996]: I0228 09:49:18.656572 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:19 crc kubenswrapper[4996]: I0228 09:49:19.186179 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hnvt"] Feb 28 09:49:19 crc kubenswrapper[4996]: I0228 09:49:19.340521 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2g5z" event={"ID":"7348f568-f0cd-463d-a688-5b6042466073","Type":"ContainerStarted","Data":"5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc"} Feb 28 09:49:19 crc kubenswrapper[4996]: I0228 09:49:19.343084 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hnvt" event={"ID":"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58","Type":"ContainerStarted","Data":"29473a2d123dfd203353182946480ffea24f564ac50b5b30ba4d7a457e33a371"} Feb 28 09:49:19 crc kubenswrapper[4996]: I0228 09:49:19.345152 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6jfm" event={"ID":"bab35c20-1bc4-48d9-9560-d3515b94b955","Type":"ContainerStarted","Data":"a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6"} Feb 28 09:49:19 crc kubenswrapper[4996]: I0228 09:49:19.369579 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x2g5z" podStartSLOduration=2.885293676 podStartE2EDuration="4.369560847s" podCreationTimestamp="2026-02-28 09:49:15 +0000 UTC" firstStartedPulling="2026-02-28 09:49:17.310343854 +0000 UTC m=+2921.001146665" lastFinishedPulling="2026-02-28 09:49:18.794611025 +0000 UTC m=+2922.485413836" observedRunningTime="2026-02-28 09:49:19.360513727 +0000 UTC m=+2923.051316548" watchObservedRunningTime="2026-02-28 09:49:19.369560847 +0000 UTC m=+2923.060363658" Feb 28 09:49:20 crc kubenswrapper[4996]: I0228 09:49:20.360621 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerID="fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083" exitCode=0 Feb 28 09:49:20 crc kubenswrapper[4996]: I0228 09:49:20.360752 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hnvt" event={"ID":"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58","Type":"ContainerDied","Data":"fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083"} Feb 28 09:49:22 crc kubenswrapper[4996]: I0228 09:49:22.388370 4996 generic.go:334] "Generic (PLEG): container finished" podID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerID="a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6" exitCode=0 Feb 28 09:49:22 crc kubenswrapper[4996]: I0228 09:49:22.388450 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6jfm" event={"ID":"bab35c20-1bc4-48d9-9560-d3515b94b955","Type":"ContainerDied","Data":"a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6"} Feb 28 09:49:23 crc kubenswrapper[4996]: I0228 09:49:23.399878 4996 generic.go:334] "Generic (PLEG): container finished" podID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerID="4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c" exitCode=0 Feb 28 09:49:23 crc kubenswrapper[4996]: I0228 09:49:23.399992 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hnvt" event={"ID":"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58","Type":"ContainerDied","Data":"4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c"} Feb 28 09:49:23 crc kubenswrapper[4996]: I0228 09:49:23.403516 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6jfm" event={"ID":"bab35c20-1bc4-48d9-9560-d3515b94b955","Type":"ContainerStarted","Data":"3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73"} Feb 28 09:49:23 crc kubenswrapper[4996]: I0228 09:49:23.442381 4996 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k6jfm" podStartSLOduration=2.949428293 podStartE2EDuration="7.442362513s" podCreationTimestamp="2026-02-28 09:49:16 +0000 UTC" firstStartedPulling="2026-02-28 09:49:18.32840284 +0000 UTC m=+2922.019205671" lastFinishedPulling="2026-02-28 09:49:22.82133708 +0000 UTC m=+2926.512139891" observedRunningTime="2026-02-28 09:49:23.442153368 +0000 UTC m=+2927.132956179" watchObservedRunningTime="2026-02-28 09:49:23.442362513 +0000 UTC m=+2927.133165324" Feb 28 09:49:24 crc kubenswrapper[4996]: I0228 09:49:24.416376 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hnvt" event={"ID":"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58","Type":"ContainerStarted","Data":"73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1"} Feb 28 09:49:24 crc kubenswrapper[4996]: I0228 09:49:24.437722 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2hnvt" podStartSLOduration=3.025206039 podStartE2EDuration="6.437695445s" podCreationTimestamp="2026-02-28 09:49:18 +0000 UTC" firstStartedPulling="2026-02-28 09:49:20.362683056 +0000 UTC m=+2924.053485867" lastFinishedPulling="2026-02-28 09:49:23.775172462 +0000 UTC m=+2927.465975273" observedRunningTime="2026-02-28 09:49:24.437209824 +0000 UTC m=+2928.128012645" watchObservedRunningTime="2026-02-28 09:49:24.437695445 +0000 UTC m=+2928.128498276" Feb 28 09:49:26 crc kubenswrapper[4996]: I0228 09:49:26.291269 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:26 crc kubenswrapper[4996]: I0228 09:49:26.291672 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:26 crc kubenswrapper[4996]: I0228 09:49:26.350033 4996 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:26 crc kubenswrapper[4996]: I0228 09:49:26.479950 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:27 crc kubenswrapper[4996]: I0228 09:49:27.253976 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:27 crc kubenswrapper[4996]: I0228 09:49:27.254059 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:28 crc kubenswrapper[4996]: I0228 09:49:28.326853 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k6jfm" podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerName="registry-server" probeResult="failure" output=< Feb 28 09:49:28 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 09:49:28 crc kubenswrapper[4996]: > Feb 28 09:49:28 crc kubenswrapper[4996]: I0228 09:49:28.657789 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:28 crc kubenswrapper[4996]: I0228 09:49:28.657841 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:28 crc kubenswrapper[4996]: I0228 09:49:28.715112 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:28 crc kubenswrapper[4996]: I0228 09:49:28.919500 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2g5z"] Feb 28 09:49:28 crc kubenswrapper[4996]: I0228 09:49:28.920431 4996 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-x2g5z" podUID="7348f568-f0cd-463d-a688-5b6042466073" containerName="registry-server" containerID="cri-o://5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc" gracePeriod=2 Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.359910 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.462721 4996 generic.go:334] "Generic (PLEG): container finished" podID="7348f568-f0cd-463d-a688-5b6042466073" containerID="5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc" exitCode=0 Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.463783 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x2g5z" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.465206 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2g5z" event={"ID":"7348f568-f0cd-463d-a688-5b6042466073","Type":"ContainerDied","Data":"5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc"} Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.465294 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x2g5z" event={"ID":"7348f568-f0cd-463d-a688-5b6042466073","Type":"ContainerDied","Data":"b1c07db1333f1849a851d3f1415729984aaab886680e0363833cbfca36d0a2ff"} Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.465383 4996 scope.go:117] "RemoveContainer" containerID="5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.492294 4996 scope.go:117] "RemoveContainer" containerID="86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.522118 4996 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.530222 4996 scope.go:117] "RemoveContainer" containerID="ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.537456 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-catalog-content\") pod \"7348f568-f0cd-463d-a688-5b6042466073\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.537809 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-utilities\") pod \"7348f568-f0cd-463d-a688-5b6042466073\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.537842 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md677\" (UniqueName: \"kubernetes.io/projected/7348f568-f0cd-463d-a688-5b6042466073-kube-api-access-md677\") pod \"7348f568-f0cd-463d-a688-5b6042466073\" (UID: \"7348f568-f0cd-463d-a688-5b6042466073\") " Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.539620 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-utilities" (OuterVolumeSpecName: "utilities") pod "7348f568-f0cd-463d-a688-5b6042466073" (UID: "7348f568-f0cd-463d-a688-5b6042466073"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.545603 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7348f568-f0cd-463d-a688-5b6042466073-kube-api-access-md677" (OuterVolumeSpecName: "kube-api-access-md677") pod "7348f568-f0cd-463d-a688-5b6042466073" (UID: "7348f568-f0cd-463d-a688-5b6042466073"). InnerVolumeSpecName "kube-api-access-md677". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.564569 4996 scope.go:117] "RemoveContainer" containerID="5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc" Feb 28 09:49:29 crc kubenswrapper[4996]: E0228 09:49:29.566135 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc\": container with ID starting with 5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc not found: ID does not exist" containerID="5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.566359 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc"} err="failed to get container status \"5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc\": rpc error: code = NotFound desc = could not find container \"5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc\": container with ID starting with 5871fe96c72ad4f93b2ed2650760cbbacfb0c6e22b66ca798441afb7f4a392dc not found: ID does not exist" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.566443 4996 scope.go:117] "RemoveContainer" containerID="86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a" Feb 28 09:49:29 crc kubenswrapper[4996]: E0228 09:49:29.566958 
4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a\": container with ID starting with 86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a not found: ID does not exist" containerID="86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.567038 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a"} err="failed to get container status \"86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a\": rpc error: code = NotFound desc = could not find container \"86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a\": container with ID starting with 86f1459d3aac9e54f5c5d9727948b74c83d371a30777d897912f7b565603e69a not found: ID does not exist" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.567068 4996 scope.go:117] "RemoveContainer" containerID="ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b" Feb 28 09:49:29 crc kubenswrapper[4996]: E0228 09:49:29.568881 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b\": container with ID starting with ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b not found: ID does not exist" containerID="ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.568914 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b"} err="failed to get container status \"ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b\": rpc error: code = 
NotFound desc = could not find container \"ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b\": container with ID starting with ea26e24bfa49d193df65541dcdfa5cb1af11fb9a8d26396ac09b6b77e021705b not found: ID does not exist" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.570543 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7348f568-f0cd-463d-a688-5b6042466073" (UID: "7348f568-f0cd-463d-a688-5b6042466073"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.640439 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.640474 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md677\" (UniqueName: \"kubernetes.io/projected/7348f568-f0cd-463d-a688-5b6042466073-kube-api-access-md677\") on node \"crc\" DevicePath \"\"" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.640485 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7348f568-f0cd-463d-a688-5b6042466073-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.794625 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2g5z"] Feb 28 09:49:29 crc kubenswrapper[4996]: I0228 09:49:29.804794 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x2g5z"] Feb 28 09:49:31 crc kubenswrapper[4996]: I0228 09:49:31.048943 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7348f568-f0cd-463d-a688-5b6042466073" path="/var/lib/kubelet/pods/7348f568-f0cd-463d-a688-5b6042466073/volumes" Feb 28 09:49:31 crc kubenswrapper[4996]: I0228 09:49:31.106381 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hnvt"] Feb 28 09:49:31 crc kubenswrapper[4996]: I0228 09:49:31.485808 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2hnvt" podUID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerName="registry-server" containerID="cri-o://73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1" gracePeriod=2 Feb 28 09:49:31 crc kubenswrapper[4996]: I0228 09:49:31.961639 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:31 crc kubenswrapper[4996]: I0228 09:49:31.990836 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj5dw\" (UniqueName: \"kubernetes.io/projected/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-kube-api-access-nj5dw\") pod \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " Feb 28 09:49:31 crc kubenswrapper[4996]: I0228 09:49:31.991084 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-catalog-content\") pod \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " Feb 28 09:49:31 crc kubenswrapper[4996]: I0228 09:49:31.991152 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-utilities\") pod \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\" (UID: \"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58\") " Feb 28 09:49:31 crc kubenswrapper[4996]: I0228 09:49:31.993116 4996 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-utilities" (OuterVolumeSpecName: "utilities") pod "1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" (UID: "1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:49:31 crc kubenswrapper[4996]: I0228 09:49:31.998665 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-kube-api-access-nj5dw" (OuterVolumeSpecName: "kube-api-access-nj5dw") pod "1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" (UID: "1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58"). InnerVolumeSpecName "kube-api-access-nj5dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.040436 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" (UID: "1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.093175 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.093730 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.093918 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj5dw\" (UniqueName: \"kubernetes.io/projected/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58-kube-api-access-nj5dw\") on node \"crc\" DevicePath \"\"" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.501988 4996 generic.go:334] "Generic (PLEG): container finished" podID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerID="73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1" exitCode=0 Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.502085 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hnvt" event={"ID":"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58","Type":"ContainerDied","Data":"73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1"} Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.502136 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hnvt" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.502161 4996 scope.go:117] "RemoveContainer" containerID="73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.502142 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hnvt" event={"ID":"1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58","Type":"ContainerDied","Data":"29473a2d123dfd203353182946480ffea24f564ac50b5b30ba4d7a457e33a371"} Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.522423 4996 scope.go:117] "RemoveContainer" containerID="4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.563178 4996 scope.go:117] "RemoveContainer" containerID="fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.572107 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hnvt"] Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.583350 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2hnvt"] Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.612447 4996 scope.go:117] "RemoveContainer" containerID="73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1" Feb 28 09:49:32 crc kubenswrapper[4996]: E0228 09:49:32.612825 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1\": container with ID starting with 73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1 not found: ID does not exist" containerID="73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.612867 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1"} err="failed to get container status \"73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1\": rpc error: code = NotFound desc = could not find container \"73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1\": container with ID starting with 73f6ae03c1becf1d154bcc1b55b450632262766a53a3eabd51a603beeb803ff1 not found: ID does not exist" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.612896 4996 scope.go:117] "RemoveContainer" containerID="4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c" Feb 28 09:49:32 crc kubenswrapper[4996]: E0228 09:49:32.613500 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c\": container with ID starting with 4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c not found: ID does not exist" containerID="4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.613555 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c"} err="failed to get container status \"4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c\": rpc error: code = NotFound desc = could not find container \"4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c\": container with ID starting with 4c96ea6f78946bbedb1fdb7924f12337a2dabef1702da8b190c9cf49ae9dea0c not found: ID does not exist" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.613575 4996 scope.go:117] "RemoveContainer" containerID="fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083" Feb 28 09:49:32 crc kubenswrapper[4996]: E0228 
09:49:32.613919 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083\": container with ID starting with fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083 not found: ID does not exist" containerID="fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083" Feb 28 09:49:32 crc kubenswrapper[4996]: I0228 09:49:32.613988 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083"} err="failed to get container status \"fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083\": rpc error: code = NotFound desc = could not find container \"fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083\": container with ID starting with fdeb3c43213c17993c70f844cebd7cfebec2d1617732e54a487d0e3b440b9083 not found: ID does not exist" Feb 28 09:49:33 crc kubenswrapper[4996]: I0228 09:49:33.051805 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" path="/var/lib/kubelet/pods/1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58/volumes" Feb 28 09:49:37 crc kubenswrapper[4996]: I0228 09:49:37.325861 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:37 crc kubenswrapper[4996]: I0228 09:49:37.396576 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:37 crc kubenswrapper[4996]: I0228 09:49:37.577269 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k6jfm"] Feb 28 09:49:38 crc kubenswrapper[4996]: I0228 09:49:38.573099 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k6jfm" 
podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerName="registry-server" containerID="cri-o://3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73" gracePeriod=2 Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.110519 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.152334 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-catalog-content\") pod \"bab35c20-1bc4-48d9-9560-d3515b94b955\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.152423 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfnmg\" (UniqueName: \"kubernetes.io/projected/bab35c20-1bc4-48d9-9560-d3515b94b955-kube-api-access-rfnmg\") pod \"bab35c20-1bc4-48d9-9560-d3515b94b955\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.152464 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-utilities\") pod \"bab35c20-1bc4-48d9-9560-d3515b94b955\" (UID: \"bab35c20-1bc4-48d9-9560-d3515b94b955\") " Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.153360 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-utilities" (OuterVolumeSpecName: "utilities") pod "bab35c20-1bc4-48d9-9560-d3515b94b955" (UID: "bab35c20-1bc4-48d9-9560-d3515b94b955"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.158329 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab35c20-1bc4-48d9-9560-d3515b94b955-kube-api-access-rfnmg" (OuterVolumeSpecName: "kube-api-access-rfnmg") pod "bab35c20-1bc4-48d9-9560-d3515b94b955" (UID: "bab35c20-1bc4-48d9-9560-d3515b94b955"). InnerVolumeSpecName "kube-api-access-rfnmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.255072 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfnmg\" (UniqueName: \"kubernetes.io/projected/bab35c20-1bc4-48d9-9560-d3515b94b955-kube-api-access-rfnmg\") on node \"crc\" DevicePath \"\"" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.255101 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.282197 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bab35c20-1bc4-48d9-9560-d3515b94b955" (UID: "bab35c20-1bc4-48d9-9560-d3515b94b955"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.356778 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab35c20-1bc4-48d9-9560-d3515b94b955-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.588415 4996 generic.go:334] "Generic (PLEG): container finished" podID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerID="3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73" exitCode=0 Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.588492 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k6jfm" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.588512 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6jfm" event={"ID":"bab35c20-1bc4-48d9-9560-d3515b94b955","Type":"ContainerDied","Data":"3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73"} Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.589138 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k6jfm" event={"ID":"bab35c20-1bc4-48d9-9560-d3515b94b955","Type":"ContainerDied","Data":"1e81e68ebeef83f1a14e204a34461fc4cb0f3e7c3dda7e9adb09bbb1a093c8a6"} Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.589225 4996 scope.go:117] "RemoveContainer" containerID="3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.616866 4996 scope.go:117] "RemoveContainer" containerID="a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.635149 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k6jfm"] Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 
09:49:39.644263 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k6jfm"] Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.655523 4996 scope.go:117] "RemoveContainer" containerID="4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.710743 4996 scope.go:117] "RemoveContainer" containerID="3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73" Feb 28 09:49:39 crc kubenswrapper[4996]: E0228 09:49:39.711207 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73\": container with ID starting with 3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73 not found: ID does not exist" containerID="3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.711240 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73"} err="failed to get container status \"3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73\": rpc error: code = NotFound desc = could not find container \"3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73\": container with ID starting with 3ae2bd3f8b47d0ee66a020c436450295eab54eda47287b48742123d5436b1a73 not found: ID does not exist" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.711268 4996 scope.go:117] "RemoveContainer" containerID="a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6" Feb 28 09:49:39 crc kubenswrapper[4996]: E0228 09:49:39.711749 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6\": container with ID 
starting with a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6 not found: ID does not exist" containerID="a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.711854 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6"} err="failed to get container status \"a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6\": rpc error: code = NotFound desc = could not find container \"a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6\": container with ID starting with a7f4d06b348fe8561d1f7b2dd12c17383bdeea879b1d24f51bc695a1f0af40f6 not found: ID does not exist" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.711937 4996 scope.go:117] "RemoveContainer" containerID="4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa" Feb 28 09:49:39 crc kubenswrapper[4996]: E0228 09:49:39.712311 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa\": container with ID starting with 4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa not found: ID does not exist" containerID="4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa" Feb 28 09:49:39 crc kubenswrapper[4996]: I0228 09:49:39.712358 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa"} err="failed to get container status \"4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa\": rpc error: code = NotFound desc = could not find container \"4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa\": container with ID starting with 4104567a9ea77f72ddf59717ee234658df21f4bd0aa2eb3b5bb49edc2cbb1afa not found: 
ID does not exist" Feb 28 09:49:41 crc kubenswrapper[4996]: I0228 09:49:41.055122 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" path="/var/lib/kubelet/pods/bab35c20-1bc4-48d9-9560-d3515b94b955/volumes" Feb 28 09:49:42 crc kubenswrapper[4996]: I0228 09:49:42.249468 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:49:42 crc kubenswrapper[4996]: I0228 09:49:42.251390 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.154743 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537870-m7jr8"] Feb 28 09:50:00 crc kubenswrapper[4996]: E0228 09:50:00.155684 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerName="extract-utilities" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155698 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerName="extract-utilities" Feb 28 09:50:00 crc kubenswrapper[4996]: E0228 09:50:00.155710 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerName="extract-utilities" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155716 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerName="extract-utilities" Feb 28 
09:50:00 crc kubenswrapper[4996]: E0228 09:50:00.155734 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7348f568-f0cd-463d-a688-5b6042466073" containerName="registry-server" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155739 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7348f568-f0cd-463d-a688-5b6042466073" containerName="registry-server" Feb 28 09:50:00 crc kubenswrapper[4996]: E0228 09:50:00.155756 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7348f568-f0cd-463d-a688-5b6042466073" containerName="extract-utilities" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155762 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7348f568-f0cd-463d-a688-5b6042466073" containerName="extract-utilities" Feb 28 09:50:00 crc kubenswrapper[4996]: E0228 09:50:00.155773 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerName="extract-content" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155779 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerName="extract-content" Feb 28 09:50:00 crc kubenswrapper[4996]: E0228 09:50:00.155790 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerName="extract-content" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155795 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerName="extract-content" Feb 28 09:50:00 crc kubenswrapper[4996]: E0228 09:50:00.155805 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerName="registry-server" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155811 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerName="registry-server" Feb 28 09:50:00 
crc kubenswrapper[4996]: E0228 09:50:00.155819 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerName="registry-server" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155825 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerName="registry-server" Feb 28 09:50:00 crc kubenswrapper[4996]: E0228 09:50:00.155833 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7348f568-f0cd-463d-a688-5b6042466073" containerName="extract-content" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155838 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="7348f568-f0cd-463d-a688-5b6042466073" containerName="extract-content" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155988 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="7348f568-f0cd-463d-a688-5b6042466073" containerName="registry-server" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.155999 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a0b7b80-b7ed-4cf0-ae6e-87b2d3083f58" containerName="registry-server" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.156030 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab35c20-1bc4-48d9-9560-d3515b94b955" containerName="registry-server" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.156570 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537870-m7jr8" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.159823 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.159990 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.160353 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.170768 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537870-m7jr8"] Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.323972 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xb5v\" (UniqueName: \"kubernetes.io/projected/fb6e9c9b-6236-4fb1-b30b-607678ad604a-kube-api-access-9xb5v\") pod \"auto-csr-approver-29537870-m7jr8\" (UID: \"fb6e9c9b-6236-4fb1-b30b-607678ad604a\") " pod="openshift-infra/auto-csr-approver-29537870-m7jr8" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.426334 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xb5v\" (UniqueName: \"kubernetes.io/projected/fb6e9c9b-6236-4fb1-b30b-607678ad604a-kube-api-access-9xb5v\") pod \"auto-csr-approver-29537870-m7jr8\" (UID: \"fb6e9c9b-6236-4fb1-b30b-607678ad604a\") " pod="openshift-infra/auto-csr-approver-29537870-m7jr8" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.451343 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xb5v\" (UniqueName: \"kubernetes.io/projected/fb6e9c9b-6236-4fb1-b30b-607678ad604a-kube-api-access-9xb5v\") pod \"auto-csr-approver-29537870-m7jr8\" (UID: \"fb6e9c9b-6236-4fb1-b30b-607678ad604a\") " 
pod="openshift-infra/auto-csr-approver-29537870-m7jr8" Feb 28 09:50:00 crc kubenswrapper[4996]: I0228 09:50:00.478616 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537870-m7jr8" Feb 28 09:50:01 crc kubenswrapper[4996]: W0228 09:50:01.048715 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb6e9c9b_6236_4fb1_b30b_607678ad604a.slice/crio-b86802749d7bf988427cbb378233d74f3f4293b83d7229f6b4c9b44be019285e WatchSource:0}: Error finding container b86802749d7bf988427cbb378233d74f3f4293b83d7229f6b4c9b44be019285e: Status 404 returned error can't find the container with id b86802749d7bf988427cbb378233d74f3f4293b83d7229f6b4c9b44be019285e Feb 28 09:50:01 crc kubenswrapper[4996]: I0228 09:50:01.055440 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537870-m7jr8"] Feb 28 09:50:01 crc kubenswrapper[4996]: I0228 09:50:01.822441 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537870-m7jr8" event={"ID":"fb6e9c9b-6236-4fb1-b30b-607678ad604a","Type":"ContainerStarted","Data":"b86802749d7bf988427cbb378233d74f3f4293b83d7229f6b4c9b44be019285e"} Feb 28 09:50:02 crc kubenswrapper[4996]: I0228 09:50:02.835999 4996 generic.go:334] "Generic (PLEG): container finished" podID="fb6e9c9b-6236-4fb1-b30b-607678ad604a" containerID="15a0bf747a4bf1f74adb22d61c035c7df94c367207a99fb1c8f043c8d9206922" exitCode=0 Feb 28 09:50:02 crc kubenswrapper[4996]: I0228 09:50:02.836163 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537870-m7jr8" event={"ID":"fb6e9c9b-6236-4fb1-b30b-607678ad604a","Type":"ContainerDied","Data":"15a0bf747a4bf1f74adb22d61c035c7df94c367207a99fb1c8f043c8d9206922"} Feb 28 09:50:04 crc kubenswrapper[4996]: I0228 09:50:04.255311 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537870-m7jr8" Feb 28 09:50:04 crc kubenswrapper[4996]: I0228 09:50:04.405728 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xb5v\" (UniqueName: \"kubernetes.io/projected/fb6e9c9b-6236-4fb1-b30b-607678ad604a-kube-api-access-9xb5v\") pod \"fb6e9c9b-6236-4fb1-b30b-607678ad604a\" (UID: \"fb6e9c9b-6236-4fb1-b30b-607678ad604a\") " Feb 28 09:50:04 crc kubenswrapper[4996]: I0228 09:50:04.415201 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb6e9c9b-6236-4fb1-b30b-607678ad604a-kube-api-access-9xb5v" (OuterVolumeSpecName: "kube-api-access-9xb5v") pod "fb6e9c9b-6236-4fb1-b30b-607678ad604a" (UID: "fb6e9c9b-6236-4fb1-b30b-607678ad604a"). InnerVolumeSpecName "kube-api-access-9xb5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:50:04 crc kubenswrapper[4996]: I0228 09:50:04.508996 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xb5v\" (UniqueName: \"kubernetes.io/projected/fb6e9c9b-6236-4fb1-b30b-607678ad604a-kube-api-access-9xb5v\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:04 crc kubenswrapper[4996]: I0228 09:50:04.861973 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537870-m7jr8" event={"ID":"fb6e9c9b-6236-4fb1-b30b-607678ad604a","Type":"ContainerDied","Data":"b86802749d7bf988427cbb378233d74f3f4293b83d7229f6b4c9b44be019285e"} Feb 28 09:50:04 crc kubenswrapper[4996]: I0228 09:50:04.862058 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b86802749d7bf988427cbb378233d74f3f4293b83d7229f6b4c9b44be019285e" Feb 28 09:50:04 crc kubenswrapper[4996]: I0228 09:50:04.862083 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537870-m7jr8" Feb 28 09:50:05 crc kubenswrapper[4996]: I0228 09:50:05.347575 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537864-zkmxj"] Feb 28 09:50:05 crc kubenswrapper[4996]: I0228 09:50:05.360133 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537864-zkmxj"] Feb 28 09:50:07 crc kubenswrapper[4996]: I0228 09:50:07.045673 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c958c3-1720-4797-8e3f-edea267640f5" path="/var/lib/kubelet/pods/06c958c3-1720-4797-8e3f-edea267640f5/volumes" Feb 28 09:50:12 crc kubenswrapper[4996]: I0228 09:50:12.248602 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:50:12 crc kubenswrapper[4996]: I0228 09:50:12.249405 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:50:12 crc kubenswrapper[4996]: I0228 09:50:12.249562 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:50:12 crc kubenswrapper[4996]: I0228 09:50:12.250922 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:50:12 crc kubenswrapper[4996]: I0228 09:50:12.251071 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" gracePeriod=600 Feb 28 09:50:12 crc kubenswrapper[4996]: E0228 09:50:12.392884 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:50:12 crc kubenswrapper[4996]: I0228 09:50:12.949398 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" exitCode=0 Feb 28 09:50:12 crc kubenswrapper[4996]: I0228 09:50:12.949448 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50"} Feb 28 09:50:12 crc kubenswrapper[4996]: I0228 09:50:12.949486 4996 scope.go:117] "RemoveContainer" containerID="e348f5b02c06338b03f35074df638af610b737fdc8d3323ed8608d12b2b3077c" Feb 28 09:50:12 crc kubenswrapper[4996]: I0228 09:50:12.950392 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:50:12 crc kubenswrapper[4996]: E0228 09:50:12.950693 4996 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:50:27 crc kubenswrapper[4996]: I0228 09:50:27.042471 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:50:27 crc kubenswrapper[4996]: E0228 09:50:27.043505 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:50:42 crc kubenswrapper[4996]: I0228 09:50:42.033804 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:50:42 crc kubenswrapper[4996]: E0228 09:50:42.034699 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:50:45 crc kubenswrapper[4996]: I0228 09:50:45.582287 4996 scope.go:117] "RemoveContainer" containerID="7e8e92a238a9bcda3977578ab2683f3cef35af29e678d515f8f6d299cdd66cb6" Feb 28 09:50:53 crc kubenswrapper[4996]: I0228 
09:50:53.034292 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:50:53 crc kubenswrapper[4996]: E0228 09:50:53.035478 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:50:56 crc kubenswrapper[4996]: I0228 09:50:56.444364 4996 generic.go:334] "Generic (PLEG): container finished" podID="1d56c0f7-03f9-4035-b2d2-ef6d77821940" containerID="fbf05b8bf0deca478ce80f9fe16f21bea65e33002c26776a548874b97ed3bf6c" exitCode=0 Feb 28 09:50:56 crc kubenswrapper[4996]: I0228 09:50:56.444518 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" event={"ID":"1d56c0f7-03f9-4035-b2d2-ef6d77821940","Type":"ContainerDied","Data":"fbf05b8bf0deca478ce80f9fe16f21bea65e33002c26776a548874b97ed3bf6c"} Feb 28 09:50:57 crc kubenswrapper[4996]: I0228 09:50:57.846686 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.029944 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ssh-key-openstack-edpm-ipam\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030048 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-0\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030070 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph-nova-0\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030088 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-1\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030121 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-3\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 
09:50:58.030154 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64bdz\" (UniqueName: \"kubernetes.io/projected/1d56c0f7-03f9-4035-b2d2-ef6d77821940-kube-api-access-64bdz\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030184 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-extra-config-0\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030216 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-inventory\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030237 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-0\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030263 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030286 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-1\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030368 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-custom-ceph-combined-ca-bundle\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.030395 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-2\") pod \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\" (UID: \"1d56c0f7-03f9-4035-b2d2-ef6d77821940\") " Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.037632 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph" (OuterVolumeSpecName: "ceph") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.038456 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d56c0f7-03f9-4035-b2d2-ef6d77821940-kube-api-access-64bdz" (OuterVolumeSpecName: "kube-api-access-64bdz") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "kube-api-access-64bdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.061570 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.064099 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.065159 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.070644 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "ceph-nova-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.071605 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.072551 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.074236 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.079693 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.083050 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.085035 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-inventory" (OuterVolumeSpecName: "inventory") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.092517 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1d56c0f7-03f9-4035-b2d2-ef6d77821940" (UID: "1d56c0f7-03f9-4035-b2d2-ef6d77821940"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131798 4996 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131830 4996 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131841 4996 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131850 4996 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131893 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64bdz\" (UniqueName: \"kubernetes.io/projected/1d56c0f7-03f9-4035-b2d2-ef6d77821940-kube-api-access-64bdz\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131905 4996 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131917 4996 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-inventory\") on node \"crc\" DevicePath \"\"" 
Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131925 4996 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131934 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131942 4996 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131952 4996 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131961 4996 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.131969 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1d56c0f7-03f9-4035-b2d2-ef6d77821940-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.464732 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" 
event={"ID":"1d56c0f7-03f9-4035-b2d2-ef6d77821940","Type":"ContainerDied","Data":"a1e820e6bb0931084c2c496d3fde1b40b3a190c6a050e23ea28a1b1b38307bd0"} Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.464774 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1e820e6bb0931084c2c496d3fde1b40b3a190c6a050e23ea28a1b1b38307bd0" Feb 28 09:50:58 crc kubenswrapper[4996]: I0228 09:50:58.464835 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm" Feb 28 09:51:04 crc kubenswrapper[4996]: I0228 09:51:04.033886 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:51:04 crc kubenswrapper[4996]: E0228 09:51:04.035234 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.179168 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 28 09:51:13 crc kubenswrapper[4996]: E0228 09:51:13.179937 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb6e9c9b-6236-4fb1-b30b-607678ad604a" containerName="oc" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.179950 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb6e9c9b-6236-4fb1-b30b-607678ad604a" containerName="oc" Feb 28 09:51:13 crc kubenswrapper[4996]: E0228 09:51:13.179970 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d56c0f7-03f9-4035-b2d2-ef6d77821940" 
containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.179978 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d56c0f7-03f9-4035-b2d2-ef6d77821940" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.180165 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d56c0f7-03f9-4035-b2d2-ef6d77821940" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.180189 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb6e9c9b-6236-4fb1-b30b-607678ad604a" containerName="oc" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.181068 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.183665 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.185099 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.200077 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.208744 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.210384 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.213527 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.259756 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.259815 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8xfz\" (UniqueName: \"kubernetes.io/projected/d40d2784-2f7e-4cde-bb71-ff077d54ea57-kube-api-access-b8xfz\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.259848 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.259874 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.259901 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-scripts\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.259926 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.259956 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-sys\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.259974 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16eb7691-5159-4f12-88d5-79d8e9b902b2-ceph\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.259996 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260061 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-etc-nvme\") pod \"cinder-backup-0\" (UID: 
\"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260091 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260123 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260143 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260165 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f75fp\" (UniqueName: \"kubernetes.io/projected/16eb7691-5159-4f12-88d5-79d8e9b902b2-kube-api-access-f75fp\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260188 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " 
pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260216 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d40d2784-2f7e-4cde-bb71-ff077d54ea57-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260240 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260259 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260280 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-lib-modules\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260301 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-dev\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260327 
4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260353 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-config-data-custom\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260391 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-run\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260410 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260434 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260454 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260482 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260507 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260532 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-run\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260575 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260597 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-config-data\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.260632 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.263425 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361167 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-run\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361211 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361235 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361252 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361273 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361292 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361306 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-run\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361302 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-run\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361336 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc 
kubenswrapper[4996]: I0228 09:51:13.361373 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-config-data\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361400 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361429 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361457 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8xfz\" (UniqueName: \"kubernetes.io/projected/d40d2784-2f7e-4cde-bb71-ff077d54ea57-kube-api-access-b8xfz\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361481 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-dev\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361541 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361553 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-run\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361553 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361458 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361618 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361623 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc 
kubenswrapper[4996]: I0228 09:51:13.361635 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361784 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-scripts\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361810 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361837 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-sys\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361851 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16eb7691-5159-4f12-88d5-79d8e9b902b2-ceph\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361866 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-etc-nvme\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361879 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361880 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361899 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361922 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361937 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " 
pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361952 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f75fp\" (UniqueName: \"kubernetes.io/projected/16eb7691-5159-4f12-88d5-79d8e9b902b2-kube-api-access-f75fp\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361969 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.361987 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d40d2784-2f7e-4cde-bb71-ff077d54ea57-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362009 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362036 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362052 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-lib-modules\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362067 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-dev\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362088 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362107 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-config-data-custom\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362125 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362250 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: 
\"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362258 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-sys\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362283 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-sys\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362305 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362369 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-etc-nvme\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362407 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-lib-modules\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362411 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362432 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362434 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d40d2784-2f7e-4cde-bb71-ff077d54ea57-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362453 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-dev\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.362480 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/16eb7691-5159-4f12-88d5-79d8e9b902b2-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.368948 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-config-data-custom\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc 
kubenswrapper[4996]: I0228 09:51:13.373963 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-config-data\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.378364 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-scripts\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.378419 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.378432 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/16eb7691-5159-4f12-88d5-79d8e9b902b2-ceph\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.378826 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d40d2784-2f7e-4cde-bb71-ff077d54ea57-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.378944 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16eb7691-5159-4f12-88d5-79d8e9b902b2-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: 
\"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.379196 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.379427 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.379796 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d40d2784-2f7e-4cde-bb71-ff077d54ea57-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.380879 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8xfz\" (UniqueName: \"kubernetes.io/projected/d40d2784-2f7e-4cde-bb71-ff077d54ea57-kube-api-access-b8xfz\") pod \"cinder-volume-volume1-0\" (UID: \"d40d2784-2f7e-4cde-bb71-ff077d54ea57\") " pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.382743 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f75fp\" (UniqueName: \"kubernetes.io/projected/16eb7691-5159-4f12-88d5-79d8e9b902b2-kube-api-access-f75fp\") pod \"cinder-backup-0\" (UID: \"16eb7691-5159-4f12-88d5-79d8e9b902b2\") " pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 
09:51:13.501242 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.534587 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.655996 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-gjcc8"] Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.657494 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-gjcc8" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.669483 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8296\" (UniqueName: \"kubernetes.io/projected/a5fc5e08-1d74-4a25-b7f5-824b82c70591-kube-api-access-j8296\") pod \"manila-db-create-gjcc8\" (UID: \"a5fc5e08-1d74-4a25-b7f5-824b82c70591\") " pod="openstack/manila-db-create-gjcc8" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.679189 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-gjcc8"] Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.682910 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5fc5e08-1d74-4a25-b7f5-824b82c70591-operator-scripts\") pod \"manila-db-create-gjcc8\" (UID: \"a5fc5e08-1d74-4a25-b7f5-824b82c70591\") " pod="openstack/manila-db-create-gjcc8" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.753008 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-3817-account-create-update-24mt4"] Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.754140 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-3817-account-create-update-24mt4" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.758506 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.768727 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3817-account-create-update-24mt4"] Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.786460 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg9t2\" (UniqueName: \"kubernetes.io/projected/05a033eb-bb26-4a33-88a3-3c7e2099329b-kube-api-access-hg9t2\") pod \"manila-3817-account-create-update-24mt4\" (UID: \"05a033eb-bb26-4a33-88a3-3c7e2099329b\") " pod="openstack/manila-3817-account-create-update-24mt4" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.786525 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8296\" (UniqueName: \"kubernetes.io/projected/a5fc5e08-1d74-4a25-b7f5-824b82c70591-kube-api-access-j8296\") pod \"manila-db-create-gjcc8\" (UID: \"a5fc5e08-1d74-4a25-b7f5-824b82c70591\") " pod="openstack/manila-db-create-gjcc8" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.786570 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a033eb-bb26-4a33-88a3-3c7e2099329b-operator-scripts\") pod \"manila-3817-account-create-update-24mt4\" (UID: \"05a033eb-bb26-4a33-88a3-3c7e2099329b\") " pod="openstack/manila-3817-account-create-update-24mt4" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.786634 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5fc5e08-1d74-4a25-b7f5-824b82c70591-operator-scripts\") pod \"manila-db-create-gjcc8\" (UID: 
\"a5fc5e08-1d74-4a25-b7f5-824b82c70591\") " pod="openstack/manila-db-create-gjcc8" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.787710 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5fc5e08-1d74-4a25-b7f5-824b82c70591-operator-scripts\") pod \"manila-db-create-gjcc8\" (UID: \"a5fc5e08-1d74-4a25-b7f5-824b82c70591\") " pod="openstack/manila-db-create-gjcc8" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.809717 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8296\" (UniqueName: \"kubernetes.io/projected/a5fc5e08-1d74-4a25-b7f5-824b82c70591-kube-api-access-j8296\") pod \"manila-db-create-gjcc8\" (UID: \"a5fc5e08-1d74-4a25-b7f5-824b82c70591\") " pod="openstack/manila-db-create-gjcc8" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.891341 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg9t2\" (UniqueName: \"kubernetes.io/projected/05a033eb-bb26-4a33-88a3-3c7e2099329b-kube-api-access-hg9t2\") pod \"manila-3817-account-create-update-24mt4\" (UID: \"05a033eb-bb26-4a33-88a3-3c7e2099329b\") " pod="openstack/manila-3817-account-create-update-24mt4" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.891424 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a033eb-bb26-4a33-88a3-3c7e2099329b-operator-scripts\") pod \"manila-3817-account-create-update-24mt4\" (UID: \"05a033eb-bb26-4a33-88a3-3c7e2099329b\") " pod="openstack/manila-3817-account-create-update-24mt4" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.892093 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a033eb-bb26-4a33-88a3-3c7e2099329b-operator-scripts\") pod \"manila-3817-account-create-update-24mt4\" (UID: 
\"05a033eb-bb26-4a33-88a3-3c7e2099329b\") " pod="openstack/manila-3817-account-create-update-24mt4" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.913934 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg9t2\" (UniqueName: \"kubernetes.io/projected/05a033eb-bb26-4a33-88a3-3c7e2099329b-kube-api-access-hg9t2\") pod \"manila-3817-account-create-update-24mt4\" (UID: \"05a033eb-bb26-4a33-88a3-3c7e2099329b\") " pod="openstack/manila-3817-account-create-update-24mt4" Feb 28 09:51:13 crc kubenswrapper[4996]: I0228 09:51:13.983755 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-gjcc8" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.054238 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.056174 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.058913 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.058926 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7whlv" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.058984 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.059138 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.068576 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.076149 4996 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/manila-3817-account-create-update-24mt4" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.095377 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-config-data\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.095688 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-scripts\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.095856 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7bb241a-bbf4-499a-b203-d51d32c8964d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.096044 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7bb241a-bbf4-499a-b203-d51d32c8964d-logs\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.096313 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2mpf\" (UniqueName: \"kubernetes.io/projected/d7bb241a-bbf4-499a-b203-d51d32c8964d-kube-api-access-t2mpf\") pod 
\"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.096508 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.096717 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.096937 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d7bb241a-bbf4-499a-b203-d51d32c8964d-ceph\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.097106 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0" Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.140175 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.142312 4996 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.146699 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.146949 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.152553 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.199076 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d7bb241a-bbf4-499a-b203-d51d32c8964d-ceph\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.199250 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.199324 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-config-data\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.199344 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-scripts\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.199410 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7bb241a-bbf4-499a-b203-d51d32c8964d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.199440 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7bb241a-bbf4-499a-b203-d51d32c8964d-logs\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.199497 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2mpf\" (UniqueName: \"kubernetes.io/projected/d7bb241a-bbf4-499a-b203-d51d32c8964d-kube-api-access-t2mpf\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.199518 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.199594 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.200753 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7bb241a-bbf4-499a-b203-d51d32c8964d-logs\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.201081 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.203515 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7bb241a-bbf4-499a-b203-d51d32c8964d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.209439 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.209642 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-scripts\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.210117 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/d7bb241a-bbf4-499a-b203-d51d32c8964d-ceph\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.218664 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.219449 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7bb241a-bbf4-499a-b203-d51d32c8964d-config-data\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.222328 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2mpf\" (UniqueName: \"kubernetes.io/projected/d7bb241a-bbf4-499a-b203-d51d32c8964d-kube-api-access-t2mpf\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.231414 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Feb 28 09:51:14 crc kubenswrapper[4996]: W0228 09:51:14.233394 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd40d2784_2f7e_4cde_bb71_ff077d54ea57.slice/crio-1b65399455003fc4bf6ec50c010d94cc7c888381121bbf7591b5295bf8196979 WatchSource:0}: Error finding container 1b65399455003fc4bf6ec50c010d94cc7c888381121bbf7591b5295bf8196979: Status 404 returned error can't find the container with id 1b65399455003fc4bf6ec50c010d94cc7c888381121bbf7591b5295bf8196979
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.258274 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"d7bb241a-bbf4-499a-b203-d51d32c8964d\") " pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.274458 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.301502 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6608d2cf-7157-45c2-9a82-99354bf88cee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.301582 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.301631 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mszjc\" (UniqueName: \"kubernetes.io/projected/6608d2cf-7157-45c2-9a82-99354bf88cee-kube-api-access-mszjc\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.301671 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6608d2cf-7157-45c2-9a82-99354bf88cee-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.301703 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.301753 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.301793 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.301823 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.301843 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6608d2cf-7157-45c2-9a82-99354bf88cee-logs\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.380941 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.407270 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.407325 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.407363 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.407397 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.407423 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6608d2cf-7157-45c2-9a82-99354bf88cee-logs\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.407481 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6608d2cf-7157-45c2-9a82-99354bf88cee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.407524 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.407550 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mszjc\" (UniqueName: \"kubernetes.io/projected/6608d2cf-7157-45c2-9a82-99354bf88cee-kube-api-access-mszjc\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.407585 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6608d2cf-7157-45c2-9a82-99354bf88cee-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.408325 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6608d2cf-7157-45c2-9a82-99354bf88cee-logs\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.408400 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6608d2cf-7157-45c2-9a82-99354bf88cee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.409249 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.414361 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.418533 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.418775 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.424260 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6608d2cf-7157-45c2-9a82-99354bf88cee-ceph\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.426545 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6608d2cf-7157-45c2-9a82-99354bf88cee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.430641 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mszjc\" (UniqueName: \"kubernetes.io/projected/6608d2cf-7157-45c2-9a82-99354bf88cee-kube-api-access-mszjc\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.476525 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"6608d2cf-7157-45c2-9a82-99354bf88cee\") " pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.530711 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.561071 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-gjcc8"]
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.623411 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-gjcc8" event={"ID":"a5fc5e08-1d74-4a25-b7f5-824b82c70591","Type":"ContainerStarted","Data":"8874b762a73e7c41a3d131adb7a2db681dcba84ce2b849cdfeac2f27dd681418"}
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.625018 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"16eb7691-5159-4f12-88d5-79d8e9b902b2","Type":"ContainerStarted","Data":"b367c3e602644de6f86f1c6f9eab306feda6c9d5d0b75a127bac7a4d6d18d8da"}
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.626116 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d40d2784-2f7e-4cde-bb71-ff077d54ea57","Type":"ContainerStarted","Data":"1b65399455003fc4bf6ec50c010d94cc7c888381121bbf7591b5295bf8196979"}
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.667141 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-3817-account-create-update-24mt4"]
Feb 28 09:51:14 crc kubenswrapper[4996]: I0228 09:51:14.915570 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 28 09:51:15 crc kubenswrapper[4996]: W0228 09:51:15.059292 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6608d2cf_7157_45c2_9a82_99354bf88cee.slice/crio-cf5aecc46621ecc5320a1da67416e2a7e56ffe4d151a16db5229ef5b1b9774e3 WatchSource:0}: Error finding container cf5aecc46621ecc5320a1da67416e2a7e56ffe4d151a16db5229ef5b1b9774e3: Status 404 returned error can't find the container with id cf5aecc46621ecc5320a1da67416e2a7e56ffe4d151a16db5229ef5b1b9774e3
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.059360 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.649620 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6608d2cf-7157-45c2-9a82-99354bf88cee","Type":"ContainerStarted","Data":"cf5aecc46621ecc5320a1da67416e2a7e56ffe4d151a16db5229ef5b1b9774e3"}
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.658404 4996 generic.go:334] "Generic (PLEG): container finished" podID="05a033eb-bb26-4a33-88a3-3c7e2099329b" containerID="e00d1a365d9cacd2c0e22d7b97375f207200e1df16e1fe4be87252115a83ead0" exitCode=0
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.658466 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3817-account-create-update-24mt4" event={"ID":"05a033eb-bb26-4a33-88a3-3c7e2099329b","Type":"ContainerDied","Data":"e00d1a365d9cacd2c0e22d7b97375f207200e1df16e1fe4be87252115a83ead0"}
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.658494 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3817-account-create-update-24mt4" event={"ID":"05a033eb-bb26-4a33-88a3-3c7e2099329b","Type":"ContainerStarted","Data":"548f5a5c5557a2530dc5562d9f485fe16e8c2d0418afe09b8c7944c1dbfa3e1c"}
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.670198 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-gjcc8" event={"ID":"a5fc5e08-1d74-4a25-b7f5-824b82c70591","Type":"ContainerDied","Data":"ba64155110b18c9cfb0773d00f2bdd9b8c041cae7f68bc6bcd57a86737e4ad9f"}
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.670208 4996 generic.go:334] "Generic (PLEG): container finished" podID="a5fc5e08-1d74-4a25-b7f5-824b82c70591" containerID="ba64155110b18c9cfb0773d00f2bdd9b8c041cae7f68bc6bcd57a86737e4ad9f" exitCode=0
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.676162 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7bb241a-bbf4-499a-b203-d51d32c8964d","Type":"ContainerStarted","Data":"d69aaa64edd89b737a96b2d901820ee345cb96fe837fb9b95c8baa4a7c813b77"}
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.686453 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"16eb7691-5159-4f12-88d5-79d8e9b902b2","Type":"ContainerStarted","Data":"83061f803c870e42d4b65d0d48f3b8b7fd0f794b225bc960258f8afd0c6ff536"}
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.697283 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d40d2784-2f7e-4cde-bb71-ff077d54ea57","Type":"ContainerStarted","Data":"4992a65c3ff8e7e91aa515aa1ea04d9cf34b8a92e78206ec2390909d6104054f"}
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.726523 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=1.964757286 podStartE2EDuration="2.726507584s" podCreationTimestamp="2026-02-28 09:51:13 +0000 UTC" firstStartedPulling="2026-02-28 09:51:14.301694689 +0000 UTC m=+3037.992497500" lastFinishedPulling="2026-02-28 09:51:15.063444987 +0000 UTC m=+3038.754247798" observedRunningTime="2026-02-28 09:51:15.720384905 +0000 UTC m=+3039.411187716" watchObservedRunningTime="2026-02-28 09:51:15.726507584 +0000 UTC m=+3039.417310395"
Feb 28 09:51:15 crc kubenswrapper[4996]: I0228 09:51:15.748537 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=1.909067151 podStartE2EDuration="2.748521419s" podCreationTimestamp="2026-02-28 09:51:13 +0000 UTC" firstStartedPulling="2026-02-28 09:51:14.235237752 +0000 UTC m=+3037.926040563" lastFinishedPulling="2026-02-28 09:51:15.07469202 +0000 UTC m=+3038.765494831" observedRunningTime="2026-02-28 09:51:15.736421305 +0000 UTC m=+3039.427224126" watchObservedRunningTime="2026-02-28 09:51:15.748521419 +0000 UTC m=+3039.439324230"
Feb 28 09:51:16 crc kubenswrapper[4996]: I0228 09:51:16.706178 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"d40d2784-2f7e-4cde-bb71-ff077d54ea57","Type":"ContainerStarted","Data":"b8c7bad8e674ad0173b0a981e123586e6a3947bd12f50533ee44aa565b37eaf5"}
Feb 28 09:51:16 crc kubenswrapper[4996]: I0228 09:51:16.709027 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6608d2cf-7157-45c2-9a82-99354bf88cee","Type":"ContainerStarted","Data":"ae1fe54be4e452095d0d2a22d560da1b1534011ca8a8f36074afb0f8394f61f9"}
Feb 28 09:51:16 crc kubenswrapper[4996]: I0228 09:51:16.709073 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6608d2cf-7157-45c2-9a82-99354bf88cee","Type":"ContainerStarted","Data":"412778d57bf1f254869b7b712175f5037d5a5a46d188047947add15ecb8308fc"}
Feb 28 09:51:16 crc kubenswrapper[4996]: I0228 09:51:16.710497 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7bb241a-bbf4-499a-b203-d51d32c8964d","Type":"ContainerStarted","Data":"4753bd0ba0999834df0d39d33cf041ef7ca623b26a1470c30a6c5af240127d68"}
Feb 28 09:51:16 crc kubenswrapper[4996]: I0228 09:51:16.710540 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d7bb241a-bbf4-499a-b203-d51d32c8964d","Type":"ContainerStarted","Data":"e93121fe108ddfa7be828043e6b4282db0cbdf04b534609e96c6303db28cdec7"}
Feb 28 09:51:16 crc kubenswrapper[4996]: I0228 09:51:16.712035 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"16eb7691-5159-4f12-88d5-79d8e9b902b2","Type":"ContainerStarted","Data":"ef3ad1ecc781537ae62a6b8cc5f2e17491925f1a35413a726812804a137057d9"}
Feb 28 09:51:16 crc kubenswrapper[4996]: I0228 09:51:16.742283 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.742268413 podStartE2EDuration="3.742268413s" podCreationTimestamp="2026-02-28 09:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:51:16.738884611 +0000 UTC m=+3040.429687432" watchObservedRunningTime="2026-02-28 09:51:16.742268413 +0000 UTC m=+3040.433071224"
Feb 28 09:51:16 crc kubenswrapper[4996]: I0228 09:51:16.765114 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.765097789 podStartE2EDuration="3.765097789s" podCreationTimestamp="2026-02-28 09:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:51:16.761592463 +0000 UTC m=+3040.452395274" watchObservedRunningTime="2026-02-28 09:51:16.765097789 +0000 UTC m=+3040.455900590"
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.083871 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3817-account-create-update-24mt4"
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.109785 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-gjcc8"
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.179610 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a033eb-bb26-4a33-88a3-3c7e2099329b-operator-scripts\") pod \"05a033eb-bb26-4a33-88a3-3c7e2099329b\" (UID: \"05a033eb-bb26-4a33-88a3-3c7e2099329b\") "
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.180517 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05a033eb-bb26-4a33-88a3-3c7e2099329b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "05a033eb-bb26-4a33-88a3-3c7e2099329b" (UID: "05a033eb-bb26-4a33-88a3-3c7e2099329b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.180656 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg9t2\" (UniqueName: \"kubernetes.io/projected/05a033eb-bb26-4a33-88a3-3c7e2099329b-kube-api-access-hg9t2\") pod \"05a033eb-bb26-4a33-88a3-3c7e2099329b\" (UID: \"05a033eb-bb26-4a33-88a3-3c7e2099329b\") "
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.181180 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/05a033eb-bb26-4a33-88a3-3c7e2099329b-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.185533 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a033eb-bb26-4a33-88a3-3c7e2099329b-kube-api-access-hg9t2" (OuterVolumeSpecName: "kube-api-access-hg9t2") pod "05a033eb-bb26-4a33-88a3-3c7e2099329b" (UID: "05a033eb-bb26-4a33-88a3-3c7e2099329b"). InnerVolumeSpecName "kube-api-access-hg9t2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.282692 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8296\" (UniqueName: \"kubernetes.io/projected/a5fc5e08-1d74-4a25-b7f5-824b82c70591-kube-api-access-j8296\") pod \"a5fc5e08-1d74-4a25-b7f5-824b82c70591\" (UID: \"a5fc5e08-1d74-4a25-b7f5-824b82c70591\") "
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.282871 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5fc5e08-1d74-4a25-b7f5-824b82c70591-operator-scripts\") pod \"a5fc5e08-1d74-4a25-b7f5-824b82c70591\" (UID: \"a5fc5e08-1d74-4a25-b7f5-824b82c70591\") "
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.283298 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg9t2\" (UniqueName: \"kubernetes.io/projected/05a033eb-bb26-4a33-88a3-3c7e2099329b-kube-api-access-hg9t2\") on node \"crc\" DevicePath \"\""
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.283954 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5fc5e08-1d74-4a25-b7f5-824b82c70591-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a5fc5e08-1d74-4a25-b7f5-824b82c70591" (UID: "a5fc5e08-1d74-4a25-b7f5-824b82c70591"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.287720 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5fc5e08-1d74-4a25-b7f5-824b82c70591-kube-api-access-j8296" (OuterVolumeSpecName: "kube-api-access-j8296") pod "a5fc5e08-1d74-4a25-b7f5-824b82c70591" (UID: "a5fc5e08-1d74-4a25-b7f5-824b82c70591"). InnerVolumeSpecName "kube-api-access-j8296". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.385135 4996 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a5fc5e08-1d74-4a25-b7f5-824b82c70591-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.385169 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8296\" (UniqueName: \"kubernetes.io/projected/a5fc5e08-1d74-4a25-b7f5-824b82c70591-kube-api-access-j8296\") on node \"crc\" DevicePath \"\""
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.723680 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-3817-account-create-update-24mt4"
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.724567 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-3817-account-create-update-24mt4" event={"ID":"05a033eb-bb26-4a33-88a3-3c7e2099329b","Type":"ContainerDied","Data":"548f5a5c5557a2530dc5562d9f485fe16e8c2d0418afe09b8c7944c1dbfa3e1c"}
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.724967 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="548f5a5c5557a2530dc5562d9f485fe16e8c2d0418afe09b8c7944c1dbfa3e1c"
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.726214 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-gjcc8" event={"ID":"a5fc5e08-1d74-4a25-b7f5-824b82c70591","Type":"ContainerDied","Data":"8874b762a73e7c41a3d131adb7a2db681dcba84ce2b849cdfeac2f27dd681418"}
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.726253 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8874b762a73e7c41a3d131adb7a2db681dcba84ce2b849cdfeac2f27dd681418"
Feb 28 09:51:17 crc kubenswrapper[4996]: I0228 09:51:17.726308 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-gjcc8"
Feb 28 09:51:18 crc kubenswrapper[4996]: I0228 09:51:18.501513 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0"
Feb 28 09:51:18 crc kubenswrapper[4996]: I0228 09:51:18.535652 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.324286 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-cd4qw"]
Feb 28 09:51:19 crc kubenswrapper[4996]: E0228 09:51:19.324907 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a033eb-bb26-4a33-88a3-3c7e2099329b" containerName="mariadb-account-create-update"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.324920 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a033eb-bb26-4a33-88a3-3c7e2099329b" containerName="mariadb-account-create-update"
Feb 28 09:51:19 crc kubenswrapper[4996]: E0228 09:51:19.324937 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fc5e08-1d74-4a25-b7f5-824b82c70591" containerName="mariadb-database-create"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.324943 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fc5e08-1d74-4a25-b7f5-824b82c70591" containerName="mariadb-database-create"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.325231 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a033eb-bb26-4a33-88a3-3c7e2099329b" containerName="mariadb-account-create-update"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.325246 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5fc5e08-1d74-4a25-b7f5-824b82c70591" containerName="mariadb-database-create"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.326426 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-cd4qw"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.329222 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.329568 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-qw5mv"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.341743 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-cd4qw"]
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.421844 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-job-config-data\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.421883 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-config-data\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.421972 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-combined-ca-bundle\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.422010 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwwg\" (UniqueName: \"kubernetes.io/projected/f9647fc5-2585-46e2-aa04-045cdeb86e5c-kube-api-access-blwwg\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.523411 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-combined-ca-bundle\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.523475 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blwwg\" (UniqueName: \"kubernetes.io/projected/f9647fc5-2585-46e2-aa04-045cdeb86e5c-kube-api-access-blwwg\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.523595 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-job-config-data\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.523617 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-config-data\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw"
Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.533823 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-job-config-data\") pod \"manila-db-sync-cd4qw\" (UID: 
\"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw" Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.545819 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-combined-ca-bundle\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw" Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.547819 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-config-data\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw" Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.590076 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blwwg\" (UniqueName: \"kubernetes.io/projected/f9647fc5-2585-46e2-aa04-045cdeb86e5c-kube-api-access-blwwg\") pod \"manila-db-sync-cd4qw\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " pod="openstack/manila-db-sync-cd4qw" Feb 28 09:51:19 crc kubenswrapper[4996]: I0228 09:51:19.654436 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-cd4qw" Feb 28 09:51:20 crc kubenswrapper[4996]: I0228 09:51:20.033541 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:51:20 crc kubenswrapper[4996]: E0228 09:51:20.034139 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:51:20 crc kubenswrapper[4996]: I0228 09:51:20.171494 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-cd4qw"] Feb 28 09:51:20 crc kubenswrapper[4996]: W0228 09:51:20.172315 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9647fc5_2585_46e2_aa04_045cdeb86e5c.slice/crio-8eb0b8b8ed6ceb386ba62134dfde54730bb60b6e9a166e932960225fb7f7a82f WatchSource:0}: Error finding container 8eb0b8b8ed6ceb386ba62134dfde54730bb60b6e9a166e932960225fb7f7a82f: Status 404 returned error can't find the container with id 8eb0b8b8ed6ceb386ba62134dfde54730bb60b6e9a166e932960225fb7f7a82f Feb 28 09:51:20 crc kubenswrapper[4996]: I0228 09:51:20.754368 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-cd4qw" event={"ID":"f9647fc5-2585-46e2-aa04-045cdeb86e5c","Type":"ContainerStarted","Data":"8eb0b8b8ed6ceb386ba62134dfde54730bb60b6e9a166e932960225fb7f7a82f"} Feb 28 09:51:23 crc kubenswrapper[4996]: I0228 09:51:23.724791 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Feb 28 09:51:23 crc kubenswrapper[4996]: I0228 09:51:23.751418 4996 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.381418 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.381493 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.423151 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.425295 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.531643 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.531707 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.559828 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.573952 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.807143 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.807418 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.807429 4996 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 09:51:24 crc kubenswrapper[4996]: I0228 09:51:24.807439 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 09:51:25 crc kubenswrapper[4996]: I0228 09:51:25.815515 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-cd4qw" event={"ID":"f9647fc5-2585-46e2-aa04-045cdeb86e5c","Type":"ContainerStarted","Data":"fa4ade38c652a1c82f075efa6dc80097763091072fe97137327c3926e21a08d4"} Feb 28 09:51:25 crc kubenswrapper[4996]: I0228 09:51:25.842290 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-cd4qw" podStartSLOduration=2.3709317260000002 podStartE2EDuration="6.842270671s" podCreationTimestamp="2026-02-28 09:51:19 +0000 UTC" firstStartedPulling="2026-02-28 09:51:20.173848524 +0000 UTC m=+3043.864651335" lastFinishedPulling="2026-02-28 09:51:24.645187459 +0000 UTC m=+3048.335990280" observedRunningTime="2026-02-28 09:51:25.833660992 +0000 UTC m=+3049.524463793" watchObservedRunningTime="2026-02-28 09:51:25.842270671 +0000 UTC m=+3049.533073482" Feb 28 09:51:26 crc kubenswrapper[4996]: I0228 09:51:26.822842 4996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 09:51:26 crc kubenswrapper[4996]: I0228 09:51:26.824019 4996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 09:51:27 crc kubenswrapper[4996]: I0228 09:51:27.199808 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 09:51:27 crc kubenswrapper[4996]: I0228 09:51:27.199956 4996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 09:51:27 crc kubenswrapper[4996]: I0228 09:51:27.203501 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 
28 09:51:27 crc kubenswrapper[4996]: I0228 09:51:27.213099 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 09:51:27 crc kubenswrapper[4996]: I0228 09:51:27.240610 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 09:51:32 crc kubenswrapper[4996]: I0228 09:51:32.034201 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:51:32 crc kubenswrapper[4996]: E0228 09:51:32.035446 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:51:35 crc kubenswrapper[4996]: I0228 09:51:35.925341 4996 generic.go:334] "Generic (PLEG): container finished" podID="f9647fc5-2585-46e2-aa04-045cdeb86e5c" containerID="fa4ade38c652a1c82f075efa6dc80097763091072fe97137327c3926e21a08d4" exitCode=0 Feb 28 09:51:35 crc kubenswrapper[4996]: I0228 09:51:35.925473 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-cd4qw" event={"ID":"f9647fc5-2585-46e2-aa04-045cdeb86e5c","Type":"ContainerDied","Data":"fa4ade38c652a1c82f075efa6dc80097763091072fe97137327c3926e21a08d4"} Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.484047 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-cd4qw" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.612655 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blwwg\" (UniqueName: \"kubernetes.io/projected/f9647fc5-2585-46e2-aa04-045cdeb86e5c-kube-api-access-blwwg\") pod \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.613613 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-combined-ca-bundle\") pod \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.613717 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-config-data\") pod \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.613806 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-job-config-data\") pod \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\" (UID: \"f9647fc5-2585-46e2-aa04-045cdeb86e5c\") " Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.619019 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9647fc5-2585-46e2-aa04-045cdeb86e5c-kube-api-access-blwwg" (OuterVolumeSpecName: "kube-api-access-blwwg") pod "f9647fc5-2585-46e2-aa04-045cdeb86e5c" (UID: "f9647fc5-2585-46e2-aa04-045cdeb86e5c"). InnerVolumeSpecName "kube-api-access-blwwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.621446 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "f9647fc5-2585-46e2-aa04-045cdeb86e5c" (UID: "f9647fc5-2585-46e2-aa04-045cdeb86e5c"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.622653 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-config-data" (OuterVolumeSpecName: "config-data") pod "f9647fc5-2585-46e2-aa04-045cdeb86e5c" (UID: "f9647fc5-2585-46e2-aa04-045cdeb86e5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.650789 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9647fc5-2585-46e2-aa04-045cdeb86e5c" (UID: "f9647fc5-2585-46e2-aa04-045cdeb86e5c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.716956 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blwwg\" (UniqueName: \"kubernetes.io/projected/f9647fc5-2585-46e2-aa04-045cdeb86e5c-kube-api-access-blwwg\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.717027 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.717043 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.717054 4996 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f9647fc5-2585-46e2-aa04-045cdeb86e5c-job-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.950812 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-cd4qw" event={"ID":"f9647fc5-2585-46e2-aa04-045cdeb86e5c","Type":"ContainerDied","Data":"8eb0b8b8ed6ceb386ba62134dfde54730bb60b6e9a166e932960225fb7f7a82f"} Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.950873 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb0b8b8ed6ceb386ba62134dfde54730bb60b6e9a166e932960225fb7f7a82f" Feb 28 09:51:37 crc kubenswrapper[4996]: I0228 09:51:37.950961 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-cd4qw" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.464722 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 28 09:51:38 crc kubenswrapper[4996]: E0228 09:51:38.465255 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9647fc5-2585-46e2-aa04-045cdeb86e5c" containerName="manila-db-sync" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.465281 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9647fc5-2585-46e2-aa04-045cdeb86e5c" containerName="manila-db-sync" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.465561 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9647fc5-2585-46e2-aa04-045cdeb86e5c" containerName="manila-db-sync" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.466817 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.472989 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.473097 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.473135 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-qw5mv" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.473377 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.473662 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.475709 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.477808 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.521212 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533070 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533141 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533178 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4288\" (UniqueName: \"kubernetes.io/projected/5605f8e7-a9fc-4784-ba4c-d5bb38984650-kube-api-access-f4288\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533228 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc 
kubenswrapper[4996]: I0228 09:51:38.533251 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533280 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5605f8e7-a9fc-4784-ba4c-d5bb38984650-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533316 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533337 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-scripts\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533397 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533416 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-scripts\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533485 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533525 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8tn\" (UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-kube-api-access-ln8tn\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533546 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.533578 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-ceph\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.546835 4996 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-76b5fdb995-t5mnf"] Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.548346 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.570343 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.628095 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-t5mnf"] Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635065 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635121 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-config\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635155 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635179 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8tn\" (UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-kube-api-access-ln8tn\") pod 
\"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635202 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635229 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-ceph\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635311 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635346 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635378 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4288\" (UniqueName: \"kubernetes.io/projected/5605f8e7-a9fc-4784-ba4c-d5bb38984650-kube-api-access-f4288\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 
crc kubenswrapper[4996]: I0228 09:51:38.635412 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635450 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635474 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635500 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5605f8e7-a9fc-4784-ba4c-d5bb38984650-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635533 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635556 4996 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-scripts\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635581 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635660 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635683 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-scripts\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635710 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.635738 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6tf\" (UniqueName: 
\"kubernetes.io/projected/3bea2fd5-b365-4936-a700-6810be669d7b-kube-api-access-dt6tf\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.636018 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5605f8e7-a9fc-4784-ba4c-d5bb38984650-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.636451 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.636810 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.646348 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-scripts\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.651212 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " 
pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.654168 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.660597 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.662771 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4288\" (UniqueName: \"kubernetes.io/projected/5605f8e7-a9fc-4784-ba4c-d5bb38984650-kube-api-access-f4288\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.666766 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.677561 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-scripts\") pod \"manila-scheduler-0\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.677670 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln8tn\" 
(UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-kube-api-access-ln8tn\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.679123 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.679231 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-ceph\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.679478 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.737114 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.737175 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6tf\" (UniqueName: \"kubernetes.io/projected/3bea2fd5-b365-4936-a700-6810be669d7b-kube-api-access-dt6tf\") pod 
\"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.737206 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.737229 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-config\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.737312 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.737359 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.738243 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: 
\"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.740674 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.740893 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-config\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.741116 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.742926 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3bea2fd5-b365-4936-a700-6810be669d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.764765 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.783732 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.785918 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.793330 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.793909 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6tf\" (UniqueName: \"kubernetes.io/projected/3bea2fd5-b365-4936-a700-6810be669d7b-kube-api-access-dt6tf\") pod \"dnsmasq-dns-76b5fdb995-t5mnf\" (UID: \"3bea2fd5-b365-4936-a700-6810be669d7b\") " pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.806158 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.806881 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.875181 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data-custom\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.875274 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.875361 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c76e71-6825-4f3c-b93d-ffb708cb6f00-logs\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.875382 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.875417 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c76e71-6825-4f3c-b93d-ffb708cb6f00-etc-machine-id\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.875465 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhd6h\" (UniqueName: \"kubernetes.io/projected/07c76e71-6825-4f3c-b93d-ffb708cb6f00-kube-api-access-nhd6h\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.875523 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-scripts\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.876955 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.978577 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c76e71-6825-4f3c-b93d-ffb708cb6f00-logs\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.978633 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.978666 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c76e71-6825-4f3c-b93d-ffb708cb6f00-etc-machine-id\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.978709 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhd6h\" (UniqueName: \"kubernetes.io/projected/07c76e71-6825-4f3c-b93d-ffb708cb6f00-kube-api-access-nhd6h\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.978756 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-scripts\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.978781 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data-custom\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.978822 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.979100 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c76e71-6825-4f3c-b93d-ffb708cb6f00-etc-machine-id\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.979420 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c76e71-6825-4f3c-b93d-ffb708cb6f00-logs\") pod \"manila-api-0\" (UID: 
\"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.982815 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.990459 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data-custom\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:38 crc kubenswrapper[4996]: I0228 09:51:38.990784 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:39 crc kubenswrapper[4996]: I0228 09:51:39.002928 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-scripts\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:39 crc kubenswrapper[4996]: I0228 09:51:39.007643 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhd6h\" (UniqueName: \"kubernetes.io/projected/07c76e71-6825-4f3c-b93d-ffb708cb6f00-kube-api-access-nhd6h\") pod \"manila-api-0\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " pod="openstack/manila-api-0" Feb 28 09:51:39 crc kubenswrapper[4996]: I0228 09:51:39.276992 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 28 09:51:39 crc kubenswrapper[4996]: I0228 09:51:39.481399 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 28 09:51:39 crc kubenswrapper[4996]: W0228 09:51:39.484244 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5605f8e7_a9fc_4784_ba4c_d5bb38984650.slice/crio-9f9b757d3a81a26f2a241d5278bf8a79b17e601bbac92910ef9a411a473da668 WatchSource:0}: Error finding container 9f9b757d3a81a26f2a241d5278bf8a79b17e601bbac92910ef9a411a473da668: Status 404 returned error can't find the container with id 9f9b757d3a81a26f2a241d5278bf8a79b17e601bbac92910ef9a411a473da668 Feb 28 09:51:39 crc kubenswrapper[4996]: I0228 09:51:39.558858 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-t5mnf"] Feb 28 09:51:39 crc kubenswrapper[4996]: I0228 09:51:39.577148 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 28 09:51:39 crc kubenswrapper[4996]: I0228 09:51:39.893297 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 28 09:51:39 crc kubenswrapper[4996]: W0228 09:51:39.899534 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07c76e71_6825_4f3c_b93d_ffb708cb6f00.slice/crio-90c060d9a1556c77937de3dd13603e647a9343064eaf4d95bed24f710106362b WatchSource:0}: Error finding container 90c060d9a1556c77937de3dd13603e647a9343064eaf4d95bed24f710106362b: Status 404 returned error can't find the container with id 90c060d9a1556c77937de3dd13603e647a9343064eaf4d95bed24f710106362b Feb 28 09:51:40 crc kubenswrapper[4996]: I0228 09:51:40.001274 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"07c76e71-6825-4f3c-b93d-ffb708cb6f00","Type":"ContainerStarted","Data":"90c060d9a1556c77937de3dd13603e647a9343064eaf4d95bed24f710106362b"} Feb 28 09:51:40 crc kubenswrapper[4996]: I0228 09:51:40.002760 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9b2d7765-9690-404b-a33d-8358a91d8f55","Type":"ContainerStarted","Data":"5b1c8e110f2d0697c42cc4864af656944c68de900512fd81758c40872316f526"} Feb 28 09:51:40 crc kubenswrapper[4996]: I0228 09:51:40.004432 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5605f8e7-a9fc-4784-ba4c-d5bb38984650","Type":"ContainerStarted","Data":"9f9b757d3a81a26f2a241d5278bf8a79b17e601bbac92910ef9a411a473da668"} Feb 28 09:51:40 crc kubenswrapper[4996]: I0228 09:51:40.005401 4996 generic.go:334] "Generic (PLEG): container finished" podID="3bea2fd5-b365-4936-a700-6810be669d7b" containerID="fe0f6813cf88d63b09273622a77730fc0f374975ec61fd15f571e68722bc5505" exitCode=0 Feb 28 09:51:40 crc kubenswrapper[4996]: I0228 09:51:40.005428 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" event={"ID":"3bea2fd5-b365-4936-a700-6810be669d7b","Type":"ContainerDied","Data":"fe0f6813cf88d63b09273622a77730fc0f374975ec61fd15f571e68722bc5505"} Feb 28 09:51:40 crc kubenswrapper[4996]: I0228 09:51:40.005443 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" event={"ID":"3bea2fd5-b365-4936-a700-6810be669d7b","Type":"ContainerStarted","Data":"f54780c2eecfbf7e1b91f13d9d4314ab31989d124ea4d35ac2d2f05e92fcc841"} Feb 28 09:51:41 crc kubenswrapper[4996]: I0228 09:51:41.024301 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5605f8e7-a9fc-4784-ba4c-d5bb38984650","Type":"ContainerStarted","Data":"60d2319ccdb05b07d7bf081959d91e85c0ccc32671a09a429e21c87c17279f51"} Feb 28 09:51:41 crc kubenswrapper[4996]: I0228 
09:51:41.029123 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" event={"ID":"3bea2fd5-b365-4936-a700-6810be669d7b","Type":"ContainerStarted","Data":"835f57f9b54681759d307138f23ad646d6bacf43b782465104c0e4beb9f64e88"} Feb 28 09:51:41 crc kubenswrapper[4996]: I0228 09:51:41.029212 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:41 crc kubenswrapper[4996]: I0228 09:51:41.044752 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"07c76e71-6825-4f3c-b93d-ffb708cb6f00","Type":"ContainerStarted","Data":"99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd"} Feb 28 09:51:41 crc kubenswrapper[4996]: I0228 09:51:41.044800 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"07c76e71-6825-4f3c-b93d-ffb708cb6f00","Type":"ContainerStarted","Data":"dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e"} Feb 28 09:51:41 crc kubenswrapper[4996]: I0228 09:51:41.089157 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" podStartSLOduration=3.089132768 podStartE2EDuration="3.089132768s" podCreationTimestamp="2026-02-28 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:51:41.054768482 +0000 UTC m=+3064.745571293" watchObservedRunningTime="2026-02-28 09:51:41.089132768 +0000 UTC m=+3064.779935579" Feb 28 09:51:41 crc kubenswrapper[4996]: I0228 09:51:41.110786 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.110762725 podStartE2EDuration="3.110762725s" podCreationTimestamp="2026-02-28 09:51:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-28 09:51:41.072705529 +0000 UTC m=+3064.763508340" watchObservedRunningTime="2026-02-28 09:51:41.110762725 +0000 UTC m=+3064.801565536" Feb 28 09:51:41 crc kubenswrapper[4996]: I0228 09:51:41.280749 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 28 09:51:42 crc kubenswrapper[4996]: I0228 09:51:42.044545 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5605f8e7-a9fc-4784-ba4c-d5bb38984650","Type":"ContainerStarted","Data":"c9e289e1bb4b13aaadc2f39899d4d3c36bc4b5f12c3ef8968d162201851d09e9"} Feb 28 09:51:42 crc kubenswrapper[4996]: I0228 09:51:42.044760 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.059369 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerName="manila-api-log" containerID="cri-o://dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e" gracePeriod=30 Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.059722 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerName="manila-api" containerID="cri-o://99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd" gracePeriod=30 Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.437129 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.764616412 podStartE2EDuration="5.437111579s" podCreationTimestamp="2026-02-28 09:51:38 +0000 UTC" firstStartedPulling="2026-02-28 09:51:39.487235524 +0000 UTC m=+3063.178038335" lastFinishedPulling="2026-02-28 09:51:40.159730691 +0000 UTC m=+3063.850533502" observedRunningTime="2026-02-28 09:51:42.063345837 +0000 UTC 
m=+3065.754148668" watchObservedRunningTime="2026-02-28 09:51:43.437111579 +0000 UTC m=+3067.127914400" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.451389 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.451650 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="ceilometer-central-agent" containerID="cri-o://9936ab31f30a3a2bf50f424c98e0016438478ad28eeb33978444299b763526c8" gracePeriod=30 Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.451756 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="proxy-httpd" containerID="cri-o://65ddfaf80b45c7cdf6301b93729987e5149f963eb3997a48aca7ee97bae411dd" gracePeriod=30 Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.451856 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="sg-core" containerID="cri-o://78450d3cb1a0c958e1eef4c7ba69a5a74b429f76fdbd7177eb3a5a8af1eb9040" gracePeriod=30 Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.451852 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="ceilometer-notification-agent" containerID="cri-o://a78f257ae7e56d19ced48008d4c57e27d72c4b8f149524e6937626ee0814da34" gracePeriod=30 Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.717500 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.806061 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhd6h\" (UniqueName: \"kubernetes.io/projected/07c76e71-6825-4f3c-b93d-ffb708cb6f00-kube-api-access-nhd6h\") pod \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.806154 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data-custom\") pod \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.806196 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-combined-ca-bundle\") pod \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.806283 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c76e71-6825-4f3c-b93d-ffb708cb6f00-etc-machine-id\") pod \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.806344 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-scripts\") pod \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.806394 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data\") pod \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.806431 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c76e71-6825-4f3c-b93d-ffb708cb6f00-logs\") pod \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\" (UID: \"07c76e71-6825-4f3c-b93d-ffb708cb6f00\") " Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.807448 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07c76e71-6825-4f3c-b93d-ffb708cb6f00-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "07c76e71-6825-4f3c-b93d-ffb708cb6f00" (UID: "07c76e71-6825-4f3c-b93d-ffb708cb6f00"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.807593 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c76e71-6825-4f3c-b93d-ffb708cb6f00-logs" (OuterVolumeSpecName: "logs") pod "07c76e71-6825-4f3c-b93d-ffb708cb6f00" (UID: "07c76e71-6825-4f3c-b93d-ffb708cb6f00"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.813691 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c76e71-6825-4f3c-b93d-ffb708cb6f00-kube-api-access-nhd6h" (OuterVolumeSpecName: "kube-api-access-nhd6h") pod "07c76e71-6825-4f3c-b93d-ffb708cb6f00" (UID: "07c76e71-6825-4f3c-b93d-ffb708cb6f00"). InnerVolumeSpecName "kube-api-access-nhd6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.814254 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "07c76e71-6825-4f3c-b93d-ffb708cb6f00" (UID: "07c76e71-6825-4f3c-b93d-ffb708cb6f00"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.816657 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-scripts" (OuterVolumeSpecName: "scripts") pod "07c76e71-6825-4f3c-b93d-ffb708cb6f00" (UID: "07c76e71-6825-4f3c-b93d-ffb708cb6f00"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.847388 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07c76e71-6825-4f3c-b93d-ffb708cb6f00" (UID: "07c76e71-6825-4f3c-b93d-ffb708cb6f00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.898124 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data" (OuterVolumeSpecName: "config-data") pod "07c76e71-6825-4f3c-b93d-ffb708cb6f00" (UID: "07c76e71-6825-4f3c-b93d-ffb708cb6f00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.908760 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhd6h\" (UniqueName: \"kubernetes.io/projected/07c76e71-6825-4f3c-b93d-ffb708cb6f00-kube-api-access-nhd6h\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.908790 4996 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.908802 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.908812 4996 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07c76e71-6825-4f3c-b93d-ffb708cb6f00-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.908821 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.908829 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07c76e71-6825-4f3c-b93d-ffb708cb6f00-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:43 crc kubenswrapper[4996]: I0228 09:51:43.908838 4996 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07c76e71-6825-4f3c-b93d-ffb708cb6f00-logs\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.034519 4996 scope.go:117] 
"RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:51:44 crc kubenswrapper[4996]: E0228 09:51:44.035608 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.077138 4996 generic.go:334] "Generic (PLEG): container finished" podID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerID="65ddfaf80b45c7cdf6301b93729987e5149f963eb3997a48aca7ee97bae411dd" exitCode=0 Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.077171 4996 generic.go:334] "Generic (PLEG): container finished" podID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerID="78450d3cb1a0c958e1eef4c7ba69a5a74b429f76fdbd7177eb3a5a8af1eb9040" exitCode=2 Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.077179 4996 generic.go:334] "Generic (PLEG): container finished" podID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerID="a78f257ae7e56d19ced48008d4c57e27d72c4b8f149524e6937626ee0814da34" exitCode=0 Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.077188 4996 generic.go:334] "Generic (PLEG): container finished" podID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerID="9936ab31f30a3a2bf50f424c98e0016438478ad28eeb33978444299b763526c8" exitCode=0 Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.077206 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerDied","Data":"65ddfaf80b45c7cdf6301b93729987e5149f963eb3997a48aca7ee97bae411dd"} Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.077256 4996 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerDied","Data":"78450d3cb1a0c958e1eef4c7ba69a5a74b429f76fdbd7177eb3a5a8af1eb9040"} Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.077267 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerDied","Data":"a78f257ae7e56d19ced48008d4c57e27d72c4b8f149524e6937626ee0814da34"} Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.077278 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerDied","Data":"9936ab31f30a3a2bf50f424c98e0016438478ad28eeb33978444299b763526c8"} Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.079318 4996 generic.go:334] "Generic (PLEG): container finished" podID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerID="99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd" exitCode=0 Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.079347 4996 generic.go:334] "Generic (PLEG): container finished" podID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerID="dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e" exitCode=143 Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.079372 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.079370 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"07c76e71-6825-4f3c-b93d-ffb708cb6f00","Type":"ContainerDied","Data":"99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd"} Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.079509 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"07c76e71-6825-4f3c-b93d-ffb708cb6f00","Type":"ContainerDied","Data":"dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e"} Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.079534 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"07c76e71-6825-4f3c-b93d-ffb708cb6f00","Type":"ContainerDied","Data":"90c060d9a1556c77937de3dd13603e647a9343064eaf4d95bed24f710106362b"} Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.079553 4996 scope.go:117] "RemoveContainer" containerID="99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.114471 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.121886 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.131646 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Feb 28 09:51:44 crc kubenswrapper[4996]: E0228 09:51:44.131994 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerName="manila-api-log" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.132024 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerName="manila-api-log" Feb 28 09:51:44 crc kubenswrapper[4996]: 
E0228 09:51:44.132038 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerName="manila-api" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.132044 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerName="manila-api" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.132231 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerName="manila-api-log" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.132246 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" containerName="manila-api" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.133412 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.134798 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.135443 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.135591 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.146766 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.315600 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f342b89-e95f-4811-a844-690bb97b8b32-logs\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.315665 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5wqg\" (UniqueName: \"kubernetes.io/projected/7f342b89-e95f-4811-a844-690bb97b8b32-kube-api-access-w5wqg\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.315696 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-public-tls-certs\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.315838 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-config-data\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.315889 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-config-data-custom\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.316053 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.316223 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-scripts\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.316367 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f342b89-e95f-4811-a844-690bb97b8b32-etc-machine-id\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.316423 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-internal-tls-certs\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.418488 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.418826 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-scripts\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.418894 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f342b89-e95f-4811-a844-690bb97b8b32-etc-machine-id\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " 
pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.418927 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-internal-tls-certs\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.418970 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f342b89-e95f-4811-a844-690bb97b8b32-logs\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.419035 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5wqg\" (UniqueName: \"kubernetes.io/projected/7f342b89-e95f-4811-a844-690bb97b8b32-kube-api-access-w5wqg\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.419071 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-public-tls-certs\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.419122 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-config-data\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.419147 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-config-data-custom\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.419138 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7f342b89-e95f-4811-a844-690bb97b8b32-etc-machine-id\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.419524 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f342b89-e95f-4811-a844-690bb97b8b32-logs\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.423545 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.423895 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-scripts\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.424717 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-public-tls-certs\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.425294 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-internal-tls-certs\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.425650 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-config-data\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.429731 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f342b89-e95f-4811-a844-690bb97b8b32-config-data-custom\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.439700 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5wqg\" (UniqueName: \"kubernetes.io/projected/7f342b89-e95f-4811-a844-690bb97b8b32-kube-api-access-w5wqg\") pod \"manila-api-0\" (UID: \"7f342b89-e95f-4811-a844-690bb97b8b32\") " pod="openstack/manila-api-0" Feb 28 09:51:44 crc kubenswrapper[4996]: I0228 09:51:44.548438 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Feb 28 09:51:45 crc kubenswrapper[4996]: I0228 09:51:45.059091 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c76e71-6825-4f3c-b93d-ffb708cb6f00" path="/var/lib/kubelet/pods/07c76e71-6825-4f3c-b93d-ffb708cb6f00/volumes" Feb 28 09:51:46 crc kubenswrapper[4996]: I0228 09:51:46.810699 4996 scope.go:117] "RemoveContainer" containerID="dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e" Feb 28 09:51:46 crc kubenswrapper[4996]: I0228 09:51:46.980594 4996 scope.go:117] "RemoveContainer" containerID="99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd" Feb 28 09:51:46 crc kubenswrapper[4996]: E0228 09:51:46.981657 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd\": container with ID starting with 99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd not found: ID does not exist" containerID="99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd" Feb 28 09:51:46 crc kubenswrapper[4996]: I0228 09:51:46.981727 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd"} err="failed to get container status \"99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd\": rpc error: code = NotFound desc = could not find container \"99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd\": container with ID starting with 99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd not found: ID does not exist" Feb 28 09:51:46 crc kubenswrapper[4996]: I0228 09:51:46.981756 4996 scope.go:117] "RemoveContainer" containerID="dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e" Feb 28 09:51:46 crc kubenswrapper[4996]: E0228 09:51:46.982097 4996 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e\": container with ID starting with dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e not found: ID does not exist" containerID="dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e" Feb 28 09:51:46 crc kubenswrapper[4996]: I0228 09:51:46.982150 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e"} err="failed to get container status \"dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e\": rpc error: code = NotFound desc = could not find container \"dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e\": container with ID starting with dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e not found: ID does not exist" Feb 28 09:51:46 crc kubenswrapper[4996]: I0228 09:51:46.982168 4996 scope.go:117] "RemoveContainer" containerID="99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd" Feb 28 09:51:46 crc kubenswrapper[4996]: I0228 09:51:46.982486 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd"} err="failed to get container status \"99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd\": rpc error: code = NotFound desc = could not find container \"99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd\": container with ID starting with 99958165e5bc436f9e1fbb804b46add7246e58a8e301d53ab14d1b5b63f14fdd not found: ID does not exist" Feb 28 09:51:46 crc kubenswrapper[4996]: I0228 09:51:46.982508 4996 scope.go:117] "RemoveContainer" containerID="dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e" Feb 28 09:51:46 crc kubenswrapper[4996]: I0228 09:51:46.982913 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e"} err="failed to get container status \"dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e\": rpc error: code = NotFound desc = could not find container \"dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e\": container with ID starting with dea7bf3a770740d212f57938211d32973e0f90de454e2775804096c5c698ed3e not found: ID does not exist" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.153386 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.280962 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-ceilometer-tls-certs\") pod \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.281077 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-log-httpd\") pod \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.281105 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jg6x\" (UniqueName: \"kubernetes.io/projected/044cb29c-1d87-46f7-b2d1-3d82f880eceb-kube-api-access-7jg6x\") pod \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.281181 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-config-data\") pod \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.281215 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-run-httpd\") pod \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.281247 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-scripts\") pod \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.281264 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-sg-core-conf-yaml\") pod \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.281370 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-combined-ca-bundle\") pod \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\" (UID: \"044cb29c-1d87-46f7-b2d1-3d82f880eceb\") " Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.283159 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "044cb29c-1d87-46f7-b2d1-3d82f880eceb" (UID: "044cb29c-1d87-46f7-b2d1-3d82f880eceb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.283304 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "044cb29c-1d87-46f7-b2d1-3d82f880eceb" (UID: "044cb29c-1d87-46f7-b2d1-3d82f880eceb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.288079 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/044cb29c-1d87-46f7-b2d1-3d82f880eceb-kube-api-access-7jg6x" (OuterVolumeSpecName: "kube-api-access-7jg6x") pod "044cb29c-1d87-46f7-b2d1-3d82f880eceb" (UID: "044cb29c-1d87-46f7-b2d1-3d82f880eceb"). InnerVolumeSpecName "kube-api-access-7jg6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.295699 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-scripts" (OuterVolumeSpecName: "scripts") pod "044cb29c-1d87-46f7-b2d1-3d82f880eceb" (UID: "044cb29c-1d87-46f7-b2d1-3d82f880eceb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.313216 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "044cb29c-1d87-46f7-b2d1-3d82f880eceb" (UID: "044cb29c-1d87-46f7-b2d1-3d82f880eceb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.348410 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "044cb29c-1d87-46f7-b2d1-3d82f880eceb" (UID: "044cb29c-1d87-46f7-b2d1-3d82f880eceb"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.380870 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "044cb29c-1d87-46f7-b2d1-3d82f880eceb" (UID: "044cb29c-1d87-46f7-b2d1-3d82f880eceb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.383153 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.383181 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.383192 4996 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.383202 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.383214 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jg6x\" (UniqueName: \"kubernetes.io/projected/044cb29c-1d87-46f7-b2d1-3d82f880eceb-kube-api-access-7jg6x\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.383224 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/044cb29c-1d87-46f7-b2d1-3d82f880eceb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.383232 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.396445 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.410380 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-config-data" (OuterVolumeSpecName: "config-data") pod "044cb29c-1d87-46f7-b2d1-3d82f880eceb" (UID: "044cb29c-1d87-46f7-b2d1-3d82f880eceb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:47 crc kubenswrapper[4996]: I0228 09:51:47.485392 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/044cb29c-1d87-46f7-b2d1-3d82f880eceb-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.144494 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7f342b89-e95f-4811-a844-690bb97b8b32","Type":"ContainerStarted","Data":"3bb500e4dbcfc9867b2f06e4194a104136c3448c9641a614ed1d7ae5ca56e486"} Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.145097 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"7f342b89-e95f-4811-a844-690bb97b8b32","Type":"ContainerStarted","Data":"8adeb1d7986e7781c1a82eaa7901dbce2e7851d7e564b83dbd8226250f4ee12f"} Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.152101 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9b2d7765-9690-404b-a33d-8358a91d8f55","Type":"ContainerStarted","Data":"296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02"} Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.152151 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9b2d7765-9690-404b-a33d-8358a91d8f55","Type":"ContainerStarted","Data":"1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96"} Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.155568 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"044cb29c-1d87-46f7-b2d1-3d82f880eceb","Type":"ContainerDied","Data":"5d2e5b2fc074bd2b7bec750ea45328ef711b4ef8baae511e34ea44905f29553a"} Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.155615 4996 scope.go:117] "RemoveContainer" 
containerID="65ddfaf80b45c7cdf6301b93729987e5149f963eb3997a48aca7ee97bae411dd" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.155679 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.186350 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.913903214 podStartE2EDuration="10.186322686s" podCreationTimestamp="2026-02-28 09:51:38 +0000 UTC" firstStartedPulling="2026-02-28 09:51:39.567775725 +0000 UTC m=+3063.258578536" lastFinishedPulling="2026-02-28 09:51:46.840195197 +0000 UTC m=+3070.530998008" observedRunningTime="2026-02-28 09:51:48.182848842 +0000 UTC m=+3071.873651653" watchObservedRunningTime="2026-02-28 09:51:48.186322686 +0000 UTC m=+3071.877125527" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.209502 4996 scope.go:117] "RemoveContainer" containerID="78450d3cb1a0c958e1eef4c7ba69a5a74b429f76fdbd7177eb3a5a8af1eb9040" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.233394 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.245412 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.269691 4996 scope.go:117] "RemoveContainer" containerID="a78f257ae7e56d19ced48008d4c57e27d72c4b8f149524e6937626ee0814da34" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.272751 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:48 crc kubenswrapper[4996]: E0228 09:51:48.273704 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="proxy-httpd" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.273730 4996 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="proxy-httpd" Feb 28 09:51:48 crc kubenswrapper[4996]: E0228 09:51:48.273777 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="ceilometer-central-agent" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.273788 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="ceilometer-central-agent" Feb 28 09:51:48 crc kubenswrapper[4996]: E0228 09:51:48.273819 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="sg-core" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.273829 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="sg-core" Feb 28 09:51:48 crc kubenswrapper[4996]: E0228 09:51:48.273859 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="ceilometer-notification-agent" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.273868 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="ceilometer-notification-agent" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.274547 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="ceilometer-notification-agent" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.274601 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="proxy-httpd" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.274631 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="ceilometer-central-agent" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.274646 4996 
memory_manager.go:354] "RemoveStaleState removing state" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" containerName="sg-core" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.294313 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.296711 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.296915 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.298956 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.300031 4996 scope.go:117] "RemoveContainer" containerID="9936ab31f30a3a2bf50f424c98e0016438478ad28eeb33978444299b763526c8" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.301837 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.418032 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-log-httpd\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.418102 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-config-data\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.418191 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.418240 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdnzq\" (UniqueName: \"kubernetes.io/projected/430931fd-b732-4b92-a8b3-320e6a4c1118-kube-api-access-bdnzq\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.418264 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.418332 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-run-httpd\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.418360 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-scripts\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.418394 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.521108 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.521206 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdnzq\" (UniqueName: \"kubernetes.io/projected/430931fd-b732-4b92-a8b3-320e6a4c1118-kube-api-access-bdnzq\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.521240 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.521569 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-run-httpd\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.521618 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-scripts\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc 
kubenswrapper[4996]: I0228 09:51:48.521680 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.521823 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-log-httpd\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.521893 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-config-data\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.522956 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-run-httpd\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.523519 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-log-httpd\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.526988 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.529344 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-scripts\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.529863 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-config-data\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.531101 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.531278 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.541154 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdnzq\" (UniqueName: \"kubernetes.io/projected/430931fd-b732-4b92-a8b3-320e6a4c1118-kube-api-access-bdnzq\") pod \"ceilometer-0\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.624636 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.800743 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.808273 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.880707 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-t5mnf" Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.953033 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-l6k66"] Feb 28 09:51:48 crc kubenswrapper[4996]: I0228 09:51:48.953307 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" podUID="80cf7647-e72e-464f-960c-decb8700cb2d" containerName="dnsmasq-dns" containerID="cri-o://18b617ac2d3c338d299aa0cddf81e2f2d0ea50e7e022b2d203354045ffcfcc6c" gracePeriod=10 Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.047740 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="044cb29c-1d87-46f7-b2d1-3d82f880eceb" path="/var/lib/kubelet/pods/044cb29c-1d87-46f7-b2d1-3d82f880eceb/volumes" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.120520 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.184197 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerStarted","Data":"971987ec03e81de78298518349e37f4a340bd58df4fcfedb0a7a20ab92a642fd"} Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.189249 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"7f342b89-e95f-4811-a844-690bb97b8b32","Type":"ContainerStarted","Data":"88b1e3dd412abea5a37647af894562a6e1dbc6d324fd3bbcc11f710f3bd14605"} Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.189588 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.195353 4996 generic.go:334] "Generic (PLEG): container finished" podID="80cf7647-e72e-464f-960c-decb8700cb2d" containerID="18b617ac2d3c338d299aa0cddf81e2f2d0ea50e7e022b2d203354045ffcfcc6c" exitCode=0 Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.195420 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" event={"ID":"80cf7647-e72e-464f-960c-decb8700cb2d","Type":"ContainerDied","Data":"18b617ac2d3c338d299aa0cddf81e2f2d0ea50e7e022b2d203354045ffcfcc6c"} Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.222340 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=5.222324088 podStartE2EDuration="5.222324088s" podCreationTimestamp="2026-02-28 09:51:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:51:49.206274878 +0000 UTC m=+3072.897077689" watchObservedRunningTime="2026-02-28 09:51:49.222324088 +0000 UTC m=+3072.913126889" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.481577 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.656173 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-config\") pod \"80cf7647-e72e-464f-960c-decb8700cb2d\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.656240 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxjrb\" (UniqueName: \"kubernetes.io/projected/80cf7647-e72e-464f-960c-decb8700cb2d-kube-api-access-kxjrb\") pod \"80cf7647-e72e-464f-960c-decb8700cb2d\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.656349 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-nb\") pod \"80cf7647-e72e-464f-960c-decb8700cb2d\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.656498 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-openstack-edpm-ipam\") pod \"80cf7647-e72e-464f-960c-decb8700cb2d\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.656676 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-dns-svc\") pod \"80cf7647-e72e-464f-960c-decb8700cb2d\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.657143 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-sb\") pod \"80cf7647-e72e-464f-960c-decb8700cb2d\" (UID: \"80cf7647-e72e-464f-960c-decb8700cb2d\") " Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.660574 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80cf7647-e72e-464f-960c-decb8700cb2d-kube-api-access-kxjrb" (OuterVolumeSpecName: "kube-api-access-kxjrb") pod "80cf7647-e72e-464f-960c-decb8700cb2d" (UID: "80cf7647-e72e-464f-960c-decb8700cb2d"). InnerVolumeSpecName "kube-api-access-kxjrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.717782 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "80cf7647-e72e-464f-960c-decb8700cb2d" (UID: "80cf7647-e72e-464f-960c-decb8700cb2d"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.726093 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-config" (OuterVolumeSpecName: "config") pod "80cf7647-e72e-464f-960c-decb8700cb2d" (UID: "80cf7647-e72e-464f-960c-decb8700cb2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.726372 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80cf7647-e72e-464f-960c-decb8700cb2d" (UID: "80cf7647-e72e-464f-960c-decb8700cb2d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.735054 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80cf7647-e72e-464f-960c-decb8700cb2d" (UID: "80cf7647-e72e-464f-960c-decb8700cb2d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.748434 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80cf7647-e72e-464f-960c-decb8700cb2d" (UID: "80cf7647-e72e-464f-960c-decb8700cb2d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.759489 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.759517 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.759528 4996 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.759536 4996 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 
09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.759546 4996 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80cf7647-e72e-464f-960c-decb8700cb2d-config\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:49 crc kubenswrapper[4996]: I0228 09:51:49.759555 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxjrb\" (UniqueName: \"kubernetes.io/projected/80cf7647-e72e-464f-960c-decb8700cb2d-kube-api-access-kxjrb\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:50 crc kubenswrapper[4996]: I0228 09:51:50.217158 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerStarted","Data":"6a1d879e7f2827f22fd92c0012d529b1393c68fa5abf8cdc68ca9f7f88919b51"} Feb 28 09:51:50 crc kubenswrapper[4996]: I0228 09:51:50.219756 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" event={"ID":"80cf7647-e72e-464f-960c-decb8700cb2d","Type":"ContainerDied","Data":"bbd02af6db264e77d647ab144702a7475f31de3054b296cd72331f4a45227405"} Feb 28 09:51:50 crc kubenswrapper[4996]: I0228 09:51:50.219832 4996 scope.go:117] "RemoveContainer" containerID="18b617ac2d3c338d299aa0cddf81e2f2d0ea50e7e022b2d203354045ffcfcc6c" Feb 28 09:51:50 crc kubenswrapper[4996]: I0228 09:51:50.220059 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-l6k66" Feb 28 09:51:50 crc kubenswrapper[4996]: I0228 09:51:50.267837 4996 scope.go:117] "RemoveContainer" containerID="75eb9fb63463ebfc71fcfbf0e23fe48ecad65a484623a790df872fbde2365683" Feb 28 09:51:50 crc kubenswrapper[4996]: I0228 09:51:50.304478 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-l6k66"] Feb 28 09:51:50 crc kubenswrapper[4996]: I0228 09:51:50.316665 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-l6k66"] Feb 28 09:51:51 crc kubenswrapper[4996]: I0228 09:51:51.043998 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80cf7647-e72e-464f-960c-decb8700cb2d" path="/var/lib/kubelet/pods/80cf7647-e72e-464f-960c-decb8700cb2d/volumes" Feb 28 09:51:51 crc kubenswrapper[4996]: I0228 09:51:51.277107 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerStarted","Data":"29173a48d707eefb4430dac909c1cdd8f3c9feea03167d72b1b5521aa4e92329"} Feb 28 09:51:52 crc kubenswrapper[4996]: I0228 09:51:52.297346 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerStarted","Data":"a76c7db3d3c2c0b4cbe43cf17fcc80cf4896a1b13196269be1d7e0405077c3a4"} Feb 28 09:51:52 crc kubenswrapper[4996]: I0228 09:51:52.722819 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:54 crc kubenswrapper[4996]: I0228 09:51:54.318923 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerStarted","Data":"7e05885ded1bcd86b6ee20ebe27f1591fb6b5d51a168c5adb3b1a63701cc7ab6"} Feb 28 09:51:54 crc kubenswrapper[4996]: I0228 09:51:54.319303 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Feb 28 09:51:54 crc kubenswrapper[4996]: I0228 09:51:54.319209 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="ceilometer-central-agent" containerID="cri-o://6a1d879e7f2827f22fd92c0012d529b1393c68fa5abf8cdc68ca9f7f88919b51" gracePeriod=30 Feb 28 09:51:54 crc kubenswrapper[4996]: I0228 09:51:54.319303 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="sg-core" containerID="cri-o://a76c7db3d3c2c0b4cbe43cf17fcc80cf4896a1b13196269be1d7e0405077c3a4" gracePeriod=30 Feb 28 09:51:54 crc kubenswrapper[4996]: I0228 09:51:54.319255 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="ceilometer-notification-agent" containerID="cri-o://29173a48d707eefb4430dac909c1cdd8f3c9feea03167d72b1b5521aa4e92329" gracePeriod=30 Feb 28 09:51:54 crc kubenswrapper[4996]: I0228 09:51:54.319215 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="proxy-httpd" containerID="cri-o://7e05885ded1bcd86b6ee20ebe27f1591fb6b5d51a168c5adb3b1a63701cc7ab6" gracePeriod=30 Feb 28 09:51:54 crc kubenswrapper[4996]: I0228 09:51:54.360297 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.195955163 podStartE2EDuration="6.360259705s" podCreationTimestamp="2026-02-28 09:51:48 +0000 UTC" firstStartedPulling="2026-02-28 09:51:49.140552799 +0000 UTC m=+3072.831355610" lastFinishedPulling="2026-02-28 09:51:53.304857341 +0000 UTC m=+3076.995660152" observedRunningTime="2026-02-28 09:51:54.344970523 +0000 UTC m=+3078.035773384" watchObservedRunningTime="2026-02-28 
09:51:54.360259705 +0000 UTC m=+3078.051062606" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.337245 4996 generic.go:334] "Generic (PLEG): container finished" podID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerID="7e05885ded1bcd86b6ee20ebe27f1591fb6b5d51a168c5adb3b1a63701cc7ab6" exitCode=0 Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.337716 4996 generic.go:334] "Generic (PLEG): container finished" podID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerID="a76c7db3d3c2c0b4cbe43cf17fcc80cf4896a1b13196269be1d7e0405077c3a4" exitCode=2 Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.337751 4996 generic.go:334] "Generic (PLEG): container finished" podID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerID="29173a48d707eefb4430dac909c1cdd8f3c9feea03167d72b1b5521aa4e92329" exitCode=0 Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.337772 4996 generic.go:334] "Generic (PLEG): container finished" podID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerID="6a1d879e7f2827f22fd92c0012d529b1393c68fa5abf8cdc68ca9f7f88919b51" exitCode=0 Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.337390 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerDied","Data":"7e05885ded1bcd86b6ee20ebe27f1591fb6b5d51a168c5adb3b1a63701cc7ab6"} Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.337844 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerDied","Data":"a76c7db3d3c2c0b4cbe43cf17fcc80cf4896a1b13196269be1d7e0405077c3a4"} Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.337883 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerDied","Data":"29173a48d707eefb4430dac909c1cdd8f3c9feea03167d72b1b5521aa4e92329"} Feb 28 09:51:55 crc 
kubenswrapper[4996]: I0228 09:51:55.337915 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerDied","Data":"6a1d879e7f2827f22fd92c0012d529b1393c68fa5abf8cdc68ca9f7f88919b51"} Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.644875 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.779733 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-config-data\") pod \"430931fd-b732-4b92-a8b3-320e6a4c1118\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.779872 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-ceilometer-tls-certs\") pod \"430931fd-b732-4b92-a8b3-320e6a4c1118\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.779991 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-combined-ca-bundle\") pod \"430931fd-b732-4b92-a8b3-320e6a4c1118\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.780093 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-scripts\") pod \"430931fd-b732-4b92-a8b3-320e6a4c1118\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.780161 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bdnzq\" (UniqueName: \"kubernetes.io/projected/430931fd-b732-4b92-a8b3-320e6a4c1118-kube-api-access-bdnzq\") pod \"430931fd-b732-4b92-a8b3-320e6a4c1118\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.780226 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-run-httpd\") pod \"430931fd-b732-4b92-a8b3-320e6a4c1118\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.780347 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-log-httpd\") pod \"430931fd-b732-4b92-a8b3-320e6a4c1118\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.780488 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-sg-core-conf-yaml\") pod \"430931fd-b732-4b92-a8b3-320e6a4c1118\" (UID: \"430931fd-b732-4b92-a8b3-320e6a4c1118\") " Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.780810 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "430931fd-b732-4b92-a8b3-320e6a4c1118" (UID: "430931fd-b732-4b92-a8b3-320e6a4c1118"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.780945 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "430931fd-b732-4b92-a8b3-320e6a4c1118" (UID: "430931fd-b732-4b92-a8b3-320e6a4c1118"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.781490 4996 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.781509 4996 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/430931fd-b732-4b92-a8b3-320e6a4c1118-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.785848 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/430931fd-b732-4b92-a8b3-320e6a4c1118-kube-api-access-bdnzq" (OuterVolumeSpecName: "kube-api-access-bdnzq") pod "430931fd-b732-4b92-a8b3-320e6a4c1118" (UID: "430931fd-b732-4b92-a8b3-320e6a4c1118"). InnerVolumeSpecName "kube-api-access-bdnzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.786066 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-scripts" (OuterVolumeSpecName: "scripts") pod "430931fd-b732-4b92-a8b3-320e6a4c1118" (UID: "430931fd-b732-4b92-a8b3-320e6a4c1118"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.830260 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "430931fd-b732-4b92-a8b3-320e6a4c1118" (UID: "430931fd-b732-4b92-a8b3-320e6a4c1118"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.846064 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "430931fd-b732-4b92-a8b3-320e6a4c1118" (UID: "430931fd-b732-4b92-a8b3-320e6a4c1118"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.852913 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "430931fd-b732-4b92-a8b3-320e6a4c1118" (UID: "430931fd-b732-4b92-a8b3-320e6a4c1118"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.882926 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.882956 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdnzq\" (UniqueName: \"kubernetes.io/projected/430931fd-b732-4b92-a8b3-320e6a4c1118-kube-api-access-bdnzq\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.882970 4996 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.882980 4996 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.882991 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.895351 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-config-data" (OuterVolumeSpecName: "config-data") pod "430931fd-b732-4b92-a8b3-320e6a4c1118" (UID: "430931fd-b732-4b92-a8b3-320e6a4c1118"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:51:55 crc kubenswrapper[4996]: I0228 09:51:55.985238 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/430931fd-b732-4b92-a8b3-320e6a4c1118-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.353850 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"430931fd-b732-4b92-a8b3-320e6a4c1118","Type":"ContainerDied","Data":"971987ec03e81de78298518349e37f4a340bd58df4fcfedb0a7a20ab92a642fd"} Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.354208 4996 scope.go:117] "RemoveContainer" containerID="7e05885ded1bcd86b6ee20ebe27f1591fb6b5d51a168c5adb3b1a63701cc7ab6" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.353979 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.398933 4996 scope.go:117] "RemoveContainer" containerID="a76c7db3d3c2c0b4cbe43cf17fcc80cf4896a1b13196269be1d7e0405077c3a4" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.421831 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.463152 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.467026 4996 scope.go:117] "RemoveContainer" containerID="29173a48d707eefb4430dac909c1cdd8f3c9feea03167d72b1b5521aa4e92329" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.474920 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:56 crc kubenswrapper[4996]: E0228 09:51:56.475637 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="sg-core" Feb 28 09:51:56 crc kubenswrapper[4996]: 
I0228 09:51:56.475672 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="sg-core" Feb 28 09:51:56 crc kubenswrapper[4996]: E0228 09:51:56.475714 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="ceilometer-central-agent" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.475728 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="ceilometer-central-agent" Feb 28 09:51:56 crc kubenswrapper[4996]: E0228 09:51:56.475759 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="ceilometer-notification-agent" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.475773 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="ceilometer-notification-agent" Feb 28 09:51:56 crc kubenswrapper[4996]: E0228 09:51:56.475812 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="proxy-httpd" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.475826 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="proxy-httpd" Feb 28 09:51:56 crc kubenswrapper[4996]: E0228 09:51:56.475859 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cf7647-e72e-464f-960c-decb8700cb2d" containerName="dnsmasq-dns" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.475877 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cf7647-e72e-464f-960c-decb8700cb2d" containerName="dnsmasq-dns" Feb 28 09:51:56 crc kubenswrapper[4996]: E0228 09:51:56.475904 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80cf7647-e72e-464f-960c-decb8700cb2d" containerName="init" Feb 28 09:51:56 crc kubenswrapper[4996]: 
I0228 09:51:56.475919 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="80cf7647-e72e-464f-960c-decb8700cb2d" containerName="init" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.476391 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="ceilometer-central-agent" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.476454 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="sg-core" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.476475 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="ceilometer-notification-agent" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.476510 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="80cf7647-e72e-464f-960c-decb8700cb2d" containerName="dnsmasq-dns" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.476529 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" containerName="proxy-httpd" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.482726 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.489510 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.489726 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.489833 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.489898 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.508625 4996 scope.go:117] "RemoveContainer" containerID="6a1d879e7f2827f22fd92c0012d529b1393c68fa5abf8cdc68ca9f7f88919b51" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.599329 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-scripts\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.599437 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn8h7\" (UniqueName: \"kubernetes.io/projected/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-kube-api-access-dn8h7\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.599459 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-run-httpd\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" 
Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.599475 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.599531 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.599563 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-log-httpd\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.599661 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-config-data\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.599739 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.702212 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dn8h7\" (UniqueName: \"kubernetes.io/projected/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-kube-api-access-dn8h7\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.702547 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-run-httpd\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.702762 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.703041 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.703187 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-run-httpd\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.703455 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-log-httpd\") pod \"ceilometer-0\" (UID: 
\"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.703653 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-log-httpd\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.703933 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-config-data\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.704337 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.704596 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-scripts\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.708290 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.709269 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.709324 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-config-data\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.711982 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-scripts\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.724949 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.729741 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn8h7\" (UniqueName: \"kubernetes.io/projected/84f19b5a-912c-4a5d-a7f7-05d8a637bc1c-kube-api-access-dn8h7\") pod \"ceilometer-0\" (UID: \"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c\") " pod="openstack/ceilometer-0" Feb 28 09:51:56 crc kubenswrapper[4996]: I0228 09:51:56.807140 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 09:51:57 crc kubenswrapper[4996]: I0228 09:51:57.050934 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="430931fd-b732-4b92-a8b3-320e6a4c1118" path="/var/lib/kubelet/pods/430931fd-b732-4b92-a8b3-320e6a4c1118/volumes" Feb 28 09:51:57 crc kubenswrapper[4996]: I0228 09:51:57.267769 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 09:51:57 crc kubenswrapper[4996]: W0228 09:51:57.277413 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84f19b5a_912c_4a5d_a7f7_05d8a637bc1c.slice/crio-0e42bb6bdd0df742d3c104e45cd7761c983108f421f2d43a1a6740418f6647aa WatchSource:0}: Error finding container 0e42bb6bdd0df742d3c104e45cd7761c983108f421f2d43a1a6740418f6647aa: Status 404 returned error can't find the container with id 0e42bb6bdd0df742d3c104e45cd7761c983108f421f2d43a1a6740418f6647aa Feb 28 09:51:57 crc kubenswrapper[4996]: I0228 09:51:57.364898 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c","Type":"ContainerStarted","Data":"0e42bb6bdd0df742d3c104e45cd7761c983108f421f2d43a1a6740418f6647aa"} Feb 28 09:51:58 crc kubenswrapper[4996]: I0228 09:51:58.418325 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c","Type":"ContainerStarted","Data":"8f1d0e237469b056f3dbd4516a40b7db6ce8bdaa84a5d15e2dc171020caae6df"} Feb 28 09:51:59 crc kubenswrapper[4996]: I0228 09:51:59.433526 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c","Type":"ContainerStarted","Data":"ed9ea4436d6635cf3112e7af1f858c84f558b751300a4624a654cde7fa209ce4"} Feb 28 09:51:59 crc kubenswrapper[4996]: I0228 09:51:59.435504 4996 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c","Type":"ContainerStarted","Data":"961fec8dbb3d18cde50d4861f82fb89937b5c65de13d84d36a365b7697f7933f"} Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.033915 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:52:00 crc kubenswrapper[4996]: E0228 09:52:00.034305 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.143463 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537872-kpqnp"] Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.145976 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537872-kpqnp" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.150605 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.150784 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.150952 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.154631 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537872-kpqnp"] Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.278024 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6jq\" (UniqueName: \"kubernetes.io/projected/16434a5d-8631-4313-83e4-8b100e810aca-kube-api-access-sc6jq\") pod \"auto-csr-approver-29537872-kpqnp\" (UID: \"16434a5d-8631-4313-83e4-8b100e810aca\") " pod="openshift-infra/auto-csr-approver-29537872-kpqnp" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.380642 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc6jq\" (UniqueName: \"kubernetes.io/projected/16434a5d-8631-4313-83e4-8b100e810aca-kube-api-access-sc6jq\") pod \"auto-csr-approver-29537872-kpqnp\" (UID: \"16434a5d-8631-4313-83e4-8b100e810aca\") " pod="openshift-infra/auto-csr-approver-29537872-kpqnp" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.409546 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.423507 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc6jq\" (UniqueName: 
\"kubernetes.io/projected/16434a5d-8631-4313-83e4-8b100e810aca-kube-api-access-sc6jq\") pod \"auto-csr-approver-29537872-kpqnp\" (UID: \"16434a5d-8631-4313-83e4-8b100e810aca\") " pod="openshift-infra/auto-csr-approver-29537872-kpqnp" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.481795 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537872-kpqnp" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.492596 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.492847 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" containerName="manila-scheduler" containerID="cri-o://60d2319ccdb05b07d7bf081959d91e85c0ccc32671a09a429e21c87c17279f51" gracePeriod=30 Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.493329 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" containerName="probe" containerID="cri-o://c9e289e1bb4b13aaadc2f39899d4d3c36bc4b5f12c3ef8968d162201851d09e9" gracePeriod=30 Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.519202 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.564911 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 28 09:52:00 crc kubenswrapper[4996]: I0228 09:52:00.970254 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537872-kpqnp"] Feb 28 09:52:00 crc kubenswrapper[4996]: W0228 09:52:00.970682 4996 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16434a5d_8631_4313_83e4_8b100e810aca.slice/crio-723a50b33b81733de29de8bdeba3df5fa2d6467320220580dc413605f9105d94 WatchSource:0}: Error finding container 723a50b33b81733de29de8bdeba3df5fa2d6467320220580dc413605f9105d94: Status 404 returned error can't find the container with id 723a50b33b81733de29de8bdeba3df5fa2d6467320220580dc413605f9105d94 Feb 28 09:52:01 crc kubenswrapper[4996]: I0228 09:52:01.453889 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537872-kpqnp" event={"ID":"16434a5d-8631-4313-83e4-8b100e810aca","Type":"ContainerStarted","Data":"723a50b33b81733de29de8bdeba3df5fa2d6467320220580dc413605f9105d94"} Feb 28 09:52:01 crc kubenswrapper[4996]: I0228 09:52:01.456679 4996 generic.go:334] "Generic (PLEG): container finished" podID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" containerID="c9e289e1bb4b13aaadc2f39899d4d3c36bc4b5f12c3ef8968d162201851d09e9" exitCode=0 Feb 28 09:52:01 crc kubenswrapper[4996]: I0228 09:52:01.456769 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5605f8e7-a9fc-4784-ba4c-d5bb38984650","Type":"ContainerDied","Data":"c9e289e1bb4b13aaadc2f39899d4d3c36bc4b5f12c3ef8968d162201851d09e9"} Feb 28 09:52:01 crc kubenswrapper[4996]: I0228 09:52:01.459867 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"84f19b5a-912c-4a5d-a7f7-05d8a637bc1c","Type":"ContainerStarted","Data":"bd1336a4f6f344f66cd38c5eb6f4f48de0db3b87172baa9df40436e368bc12cd"} Feb 28 09:52:01 crc kubenswrapper[4996]: I0228 09:52:01.460085 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="9b2d7765-9690-404b-a33d-8358a91d8f55" containerName="manila-share" containerID="cri-o://1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96" gracePeriod=30 Feb 28 09:52:01 crc kubenswrapper[4996]: I0228 
09:52:01.460172 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="9b2d7765-9690-404b-a33d-8358a91d8f55" containerName="probe" containerID="cri-o://296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02" gracePeriod=30 Feb 28 09:52:01 crc kubenswrapper[4996]: I0228 09:52:01.500459 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.846452775 podStartE2EDuration="5.500430198s" podCreationTimestamp="2026-02-28 09:51:56 +0000 UTC" firstStartedPulling="2026-02-28 09:51:57.280859621 +0000 UTC m=+3080.971662472" lastFinishedPulling="2026-02-28 09:52:00.934837044 +0000 UTC m=+3084.625639895" observedRunningTime="2026-02-28 09:52:01.494863232 +0000 UTC m=+3085.185666043" watchObservedRunningTime="2026-02-28 09:52:01.500430198 +0000 UTC m=+3085.191233039" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.260391 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.325849 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-etc-machine-id\") pod \"9b2d7765-9690-404b-a33d-8358a91d8f55\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.325924 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data-custom\") pod \"9b2d7765-9690-404b-a33d-8358a91d8f55\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.325987 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-combined-ca-bundle\") pod \"9b2d7765-9690-404b-a33d-8358a91d8f55\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.326060 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-ceph\") pod \"9b2d7765-9690-404b-a33d-8358a91d8f55\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.326076 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-var-lib-manila\") pod \"9b2d7765-9690-404b-a33d-8358a91d8f55\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.326153 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln8tn\" 
(UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-kube-api-access-ln8tn\") pod \"9b2d7765-9690-404b-a33d-8358a91d8f55\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.326185 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data\") pod \"9b2d7765-9690-404b-a33d-8358a91d8f55\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.326217 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-scripts\") pod \"9b2d7765-9690-404b-a33d-8358a91d8f55\" (UID: \"9b2d7765-9690-404b-a33d-8358a91d8f55\") " Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.330424 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "9b2d7765-9690-404b-a33d-8358a91d8f55" (UID: "9b2d7765-9690-404b-a33d-8358a91d8f55"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.333075 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9b2d7765-9690-404b-a33d-8358a91d8f55" (UID: "9b2d7765-9690-404b-a33d-8358a91d8f55"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.333697 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-scripts" (OuterVolumeSpecName: "scripts") pod "9b2d7765-9690-404b-a33d-8358a91d8f55" (UID: "9b2d7765-9690-404b-a33d-8358a91d8f55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.335515 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b2d7765-9690-404b-a33d-8358a91d8f55" (UID: "9b2d7765-9690-404b-a33d-8358a91d8f55"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.338413 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-ceph" (OuterVolumeSpecName: "ceph") pod "9b2d7765-9690-404b-a33d-8358a91d8f55" (UID: "9b2d7765-9690-404b-a33d-8358a91d8f55"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.340481 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-kube-api-access-ln8tn" (OuterVolumeSpecName: "kube-api-access-ln8tn") pod "9b2d7765-9690-404b-a33d-8358a91d8f55" (UID: "9b2d7765-9690-404b-a33d-8358a91d8f55"). InnerVolumeSpecName "kube-api-access-ln8tn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.375889 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b2d7765-9690-404b-a33d-8358a91d8f55" (UID: "9b2d7765-9690-404b-a33d-8358a91d8f55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.428264 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.428297 4996 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.428306 4996 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.428315 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.428324 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.428331 4996 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/9b2d7765-9690-404b-a33d-8358a91d8f55-var-lib-manila\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.428339 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln8tn\" (UniqueName: \"kubernetes.io/projected/9b2d7765-9690-404b-a33d-8358a91d8f55-kube-api-access-ln8tn\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.461078 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data" (OuterVolumeSpecName: "config-data") pod "9b2d7765-9690-404b-a33d-8358a91d8f55" (UID: "9b2d7765-9690-404b-a33d-8358a91d8f55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.470158 4996 generic.go:334] "Generic (PLEG): container finished" podID="9b2d7765-9690-404b-a33d-8358a91d8f55" containerID="296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02" exitCode=0 Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.470200 4996 generic.go:334] "Generic (PLEG): container finished" podID="9b2d7765-9690-404b-a33d-8358a91d8f55" containerID="1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96" exitCode=1 Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.470312 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.471517 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9b2d7765-9690-404b-a33d-8358a91d8f55","Type":"ContainerDied","Data":"296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02"} Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.471585 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9b2d7765-9690-404b-a33d-8358a91d8f55","Type":"ContainerDied","Data":"1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96"} Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.471603 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"9b2d7765-9690-404b-a33d-8358a91d8f55","Type":"ContainerDied","Data":"5b1c8e110f2d0697c42cc4864af656944c68de900512fd81758c40872316f526"} Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.471626 4996 scope.go:117] "RemoveContainer" containerID="296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.475132 4996 generic.go:334] "Generic (PLEG): container finished" podID="16434a5d-8631-4313-83e4-8b100e810aca" containerID="bea35b779565b590d3954035610d13e158e3923ba7ac113ea4e7b3bfb316b209" exitCode=0 Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.476586 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537872-kpqnp" event={"ID":"16434a5d-8631-4313-83e4-8b100e810aca","Type":"ContainerDied","Data":"bea35b779565b590d3954035610d13e158e3923ba7ac113ea4e7b3bfb316b209"} Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.476620 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.530315 4996 reconciler_common.go:293] "Volume 
detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b2d7765-9690-404b-a33d-8358a91d8f55-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.564834 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.567920 4996 scope.go:117] "RemoveContainer" containerID="1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.572053 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.602147 4996 scope.go:117] "RemoveContainer" containerID="296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02" Feb 28 09:52:02 crc kubenswrapper[4996]: E0228 09:52:02.602689 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02\": container with ID starting with 296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02 not found: ID does not exist" containerID="296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.602745 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02"} err="failed to get container status \"296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02\": rpc error: code = NotFound desc = could not find container \"296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02\": container with ID starting with 296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02 not found: ID does not exist" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.602778 4996 scope.go:117] "RemoveContainer" 
containerID="1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.602907 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Feb 28 09:52:02 crc kubenswrapper[4996]: E0228 09:52:02.603168 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96\": container with ID starting with 1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96 not found: ID does not exist" containerID="1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.603222 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96"} err="failed to get container status \"1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96\": rpc error: code = NotFound desc = could not find container \"1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96\": container with ID starting with 1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96 not found: ID does not exist" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.603248 4996 scope.go:117] "RemoveContainer" containerID="296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.603699 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02"} err="failed to get container status \"296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02\": rpc error: code = NotFound desc = could not find container \"296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02\": container with ID starting with 
296b8afd163a02c65c4c2e6380c2834c4529b0bd9500176b2bfa707c6defdb02 not found: ID does not exist" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.603744 4996 scope.go:117] "RemoveContainer" containerID="1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.604206 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96"} err="failed to get container status \"1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96\": rpc error: code = NotFound desc = could not find container \"1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96\": container with ID starting with 1ca497b873ea461833b1a587234d4b13c2a0a399a46f464c918e25bedb05fe96 not found: ID does not exist" Feb 28 09:52:02 crc kubenswrapper[4996]: E0228 09:52:02.604214 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2d7765-9690-404b-a33d-8358a91d8f55" containerName="probe" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.604350 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2d7765-9690-404b-a33d-8358a91d8f55" containerName="probe" Feb 28 09:52:02 crc kubenswrapper[4996]: E0228 09:52:02.604433 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b2d7765-9690-404b-a33d-8358a91d8f55" containerName="manila-share" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.604487 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b2d7765-9690-404b-a33d-8358a91d8f55" containerName="manila-share" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.604877 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2d7765-9690-404b-a33d-8358a91d8f55" containerName="probe" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.604964 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2d7765-9690-404b-a33d-8358a91d8f55" 
containerName="manila-share" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.605952 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.611623 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.614913 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.631782 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bd1af971-3595-4d44-98b7-8878b4d13222-ceph\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.631880 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-scripts\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.631985 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt8nj\" (UniqueName: \"kubernetes.io/projected/bd1af971-3595-4d44-98b7-8878b4d13222-kube-api-access-kt8nj\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.632027 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-config-data\") pod \"manila-share-share1-0\" 
(UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.632045 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/bd1af971-3595-4d44-98b7-8878b4d13222-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.632066 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd1af971-3595-4d44-98b7-8878b4d13222-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.632099 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.632132 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.733688 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bd1af971-3595-4d44-98b7-8878b4d13222-ceph\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " 
pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.733775 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-scripts\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.733909 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt8nj\" (UniqueName: \"kubernetes.io/projected/bd1af971-3595-4d44-98b7-8878b4d13222-kube-api-access-kt8nj\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.733936 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/bd1af971-3595-4d44-98b7-8878b4d13222-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.733960 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-config-data\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.733985 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd1af971-3595-4d44-98b7-8878b4d13222-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.734041 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.734057 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/bd1af971-3595-4d44-98b7-8878b4d13222-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.734072 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bd1af971-3595-4d44-98b7-8878b4d13222-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.734071 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.741437 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.741870 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.742646 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bd1af971-3595-4d44-98b7-8878b4d13222-ceph\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.744455 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-scripts\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.752787 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd1af971-3595-4d44-98b7-8878b4d13222-config-data\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.781680 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt8nj\" (UniqueName: \"kubernetes.io/projected/bd1af971-3595-4d44-98b7-8878b4d13222-kube-api-access-kt8nj\") pod \"manila-share-share1-0\" (UID: \"bd1af971-3595-4d44-98b7-8878b4d13222\") " pod="openstack/manila-share-share1-0" Feb 28 09:52:02 crc kubenswrapper[4996]: I0228 09:52:02.928270 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.063624 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2d7765-9690-404b-a33d-8358a91d8f55" path="/var/lib/kubelet/pods/9b2d7765-9690-404b-a33d-8358a91d8f55/volumes" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.490320 4996 generic.go:334] "Generic (PLEG): container finished" podID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" containerID="60d2319ccdb05b07d7bf081959d91e85c0ccc32671a09a429e21c87c17279f51" exitCode=0 Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.490402 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5605f8e7-a9fc-4784-ba4c-d5bb38984650","Type":"ContainerDied","Data":"60d2319ccdb05b07d7bf081959d91e85c0ccc32671a09a429e21c87c17279f51"} Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.523073 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.684292 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.751135 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4288\" (UniqueName: \"kubernetes.io/projected/5605f8e7-a9fc-4784-ba4c-d5bb38984650-kube-api-access-f4288\") pod \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.751303 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-scripts\") pod \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.751533 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data-custom\") pod \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.751607 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-combined-ca-bundle\") pod \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.751635 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data\") pod \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.751717 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/5605f8e7-a9fc-4784-ba4c-d5bb38984650-etc-machine-id\") pod \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\" (UID: \"5605f8e7-a9fc-4784-ba4c-d5bb38984650\") " Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.752335 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5605f8e7-a9fc-4784-ba4c-d5bb38984650-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5605f8e7-a9fc-4784-ba4c-d5bb38984650" (UID: "5605f8e7-a9fc-4784-ba4c-d5bb38984650"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.754913 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5605f8e7-a9fc-4784-ba4c-d5bb38984650-kube-api-access-f4288" (OuterVolumeSpecName: "kube-api-access-f4288") pod "5605f8e7-a9fc-4784-ba4c-d5bb38984650" (UID: "5605f8e7-a9fc-4784-ba4c-d5bb38984650"). InnerVolumeSpecName "kube-api-access-f4288". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.755179 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-scripts" (OuterVolumeSpecName: "scripts") pod "5605f8e7-a9fc-4784-ba4c-d5bb38984650" (UID: "5605f8e7-a9fc-4784-ba4c-d5bb38984650"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.757833 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5605f8e7-a9fc-4784-ba4c-d5bb38984650" (UID: "5605f8e7-a9fc-4784-ba4c-d5bb38984650"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.795535 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5605f8e7-a9fc-4784-ba4c-d5bb38984650" (UID: "5605f8e7-a9fc-4784-ba4c-d5bb38984650"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.827914 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537872-kpqnp" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.853530 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc6jq\" (UniqueName: \"kubernetes.io/projected/16434a5d-8631-4313-83e4-8b100e810aca-kube-api-access-sc6jq\") pod \"16434a5d-8631-4313-83e4-8b100e810aca\" (UID: \"16434a5d-8631-4313-83e4-8b100e810aca\") " Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.854186 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data" (OuterVolumeSpecName: "config-data") pod "5605f8e7-a9fc-4784-ba4c-d5bb38984650" (UID: "5605f8e7-a9fc-4784-ba4c-d5bb38984650"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.854252 4996 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.854274 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.854288 4996 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5605f8e7-a9fc-4784-ba4c-d5bb38984650-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.854301 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4288\" (UniqueName: \"kubernetes.io/projected/5605f8e7-a9fc-4784-ba4c-d5bb38984650-kube-api-access-f4288\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.854315 4996 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.857169 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16434a5d-8631-4313-83e4-8b100e810aca-kube-api-access-sc6jq" (OuterVolumeSpecName: "kube-api-access-sc6jq") pod "16434a5d-8631-4313-83e4-8b100e810aca" (UID: "16434a5d-8631-4313-83e4-8b100e810aca"). InnerVolumeSpecName "kube-api-access-sc6jq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.955851 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5605f8e7-a9fc-4784-ba4c-d5bb38984650-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:03 crc kubenswrapper[4996]: I0228 09:52:03.955877 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc6jq\" (UniqueName: \"kubernetes.io/projected/16434a5d-8631-4313-83e4-8b100e810aca-kube-api-access-sc6jq\") on node \"crc\" DevicePath \"\"" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.525216 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"bd1af971-3595-4d44-98b7-8878b4d13222","Type":"ContainerStarted","Data":"7195477ae20a73a8da793d63446bea72f540051b6a9c0de46d0894d550e5b0c2"} Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.525485 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"bd1af971-3595-4d44-98b7-8878b4d13222","Type":"ContainerStarted","Data":"1b3ccbe65e7b8a2fc53c85d198e535eb4b68417465962111822b8be2b2aaf09e"} Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.529738 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"5605f8e7-a9fc-4784-ba4c-d5bb38984650","Type":"ContainerDied","Data":"9f9b757d3a81a26f2a241d5278bf8a79b17e601bbac92910ef9a411a473da668"} Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.529770 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.529803 4996 scope.go:117] "RemoveContainer" containerID="c9e289e1bb4b13aaadc2f39899d4d3c36bc4b5f12c3ef8968d162201851d09e9" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.537040 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537872-kpqnp" event={"ID":"16434a5d-8631-4313-83e4-8b100e810aca","Type":"ContainerDied","Data":"723a50b33b81733de29de8bdeba3df5fa2d6467320220580dc413605f9105d94"} Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.537074 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="723a50b33b81733de29de8bdeba3df5fa2d6467320220580dc413605f9105d94" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.537141 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537872-kpqnp" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.582799 4996 scope.go:117] "RemoveContainer" containerID="60d2319ccdb05b07d7bf081959d91e85c0ccc32671a09a429e21c87c17279f51" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.591649 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.607869 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.667683 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Feb 28 09:52:04 crc kubenswrapper[4996]: E0228 09:52:04.668598 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" containerName="manila-scheduler" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.668687 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" 
containerName="manila-scheduler" Feb 28 09:52:04 crc kubenswrapper[4996]: E0228 09:52:04.668806 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" containerName="probe" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.668897 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" containerName="probe" Feb 28 09:52:04 crc kubenswrapper[4996]: E0228 09:52:04.668988 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16434a5d-8631-4313-83e4-8b100e810aca" containerName="oc" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.669121 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="16434a5d-8631-4313-83e4-8b100e810aca" containerName="oc" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.669436 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="16434a5d-8631-4313-83e4-8b100e810aca" containerName="oc" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.669538 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" containerName="manila-scheduler" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.669661 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" containerName="probe" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.672338 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.675872 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.686764 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.773439 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.773485 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-config-data\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.773636 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-scripts\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.773701 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.773754 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6k6h\" (UniqueName: \"kubernetes.io/projected/bb3beaae-37f6-4cbd-af32-919a3b9df37e-kube-api-access-w6k6h\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.773879 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb3beaae-37f6-4cbd-af32-919a3b9df37e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.876082 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6k6h\" (UniqueName: \"kubernetes.io/projected/bb3beaae-37f6-4cbd-af32-919a3b9df37e-kube-api-access-w6k6h\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.876175 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb3beaae-37f6-4cbd-af32-919a3b9df37e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.876234 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.876266 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-config-data\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.876279 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb3beaae-37f6-4cbd-af32-919a3b9df37e-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.876611 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-scripts\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.876696 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.882429 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-config-data\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.883324 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " 
pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.893431 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.896584 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb3beaae-37f6-4cbd-af32-919a3b9df37e-scripts\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.904472 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6k6h\" (UniqueName: \"kubernetes.io/projected/bb3beaae-37f6-4cbd-af32-919a3b9df37e-kube-api-access-w6k6h\") pod \"manila-scheduler-0\" (UID: \"bb3beaae-37f6-4cbd-af32-919a3b9df37e\") " pod="openstack/manila-scheduler-0" Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.906098 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537866-s6pj7"] Feb 28 09:52:04 crc kubenswrapper[4996]: I0228 09:52:04.913375 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537866-s6pj7"] Feb 28 09:52:05 crc kubenswrapper[4996]: I0228 09:52:05.012073 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Feb 28 09:52:05 crc kubenswrapper[4996]: I0228 09:52:05.051979 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5605f8e7-a9fc-4784-ba4c-d5bb38984650" path="/var/lib/kubelet/pods/5605f8e7-a9fc-4784-ba4c-d5bb38984650/volumes" Feb 28 09:52:05 crc kubenswrapper[4996]: I0228 09:52:05.053069 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d27b47-9525-4b0a-b96e-6f4d3d8222ba" path="/var/lib/kubelet/pods/75d27b47-9525-4b0a-b96e-6f4d3d8222ba/volumes" Feb 28 09:52:05 crc kubenswrapper[4996]: I0228 09:52:05.502038 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Feb 28 09:52:05 crc kubenswrapper[4996]: W0228 09:52:05.502806 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3beaae_37f6_4cbd_af32_919a3b9df37e.slice/crio-86c225a418deace7b5c561580acc8b2f2e6931bd5209769d665a5afbe2266911 WatchSource:0}: Error finding container 86c225a418deace7b5c561580acc8b2f2e6931bd5209769d665a5afbe2266911: Status 404 returned error can't find the container with id 86c225a418deace7b5c561580acc8b2f2e6931bd5209769d665a5afbe2266911 Feb 28 09:52:05 crc kubenswrapper[4996]: I0228 09:52:05.568835 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"bd1af971-3595-4d44-98b7-8878b4d13222","Type":"ContainerStarted","Data":"ad1c5f5fe6fd4fe684e4c957c6b5e5b1f7fc6f4f0c59eb8aafdc5bbcdca092e8"} Feb 28 09:52:05 crc kubenswrapper[4996]: I0228 09:52:05.574220 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"bb3beaae-37f6-4cbd-af32-919a3b9df37e","Type":"ContainerStarted","Data":"86c225a418deace7b5c561580acc8b2f2e6931bd5209769d665a5afbe2266911"} Feb 28 09:52:05 crc kubenswrapper[4996]: I0228 09:52:05.864533 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/manila-api-0" Feb 28 09:52:05 crc kubenswrapper[4996]: I0228 09:52:05.892772 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.892749529 podStartE2EDuration="3.892749529s" podCreationTimestamp="2026-02-28 09:52:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:52:05.600924368 +0000 UTC m=+3089.291727179" watchObservedRunningTime="2026-02-28 09:52:05.892749529 +0000 UTC m=+3089.583552340" Feb 28 09:52:06 crc kubenswrapper[4996]: I0228 09:52:06.592434 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"bb3beaae-37f6-4cbd-af32-919a3b9df37e","Type":"ContainerStarted","Data":"9d0b5e565f657478cdd4ef7bea85e5dd2d395669b334f35fa95e0022f8e0b7e1"} Feb 28 09:52:06 crc kubenswrapper[4996]: I0228 09:52:06.592795 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"bb3beaae-37f6-4cbd-af32-919a3b9df37e","Type":"ContainerStarted","Data":"0d133dc418b85bd05698c56ae6945c1edc1ed4a159b2fdae6cd0c3be6f708696"} Feb 28 09:52:06 crc kubenswrapper[4996]: I0228 09:52:06.642692 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.64266857 podStartE2EDuration="2.64266857s" podCreationTimestamp="2026-02-28 09:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 09:52:06.618173804 +0000 UTC m=+3090.308976655" watchObservedRunningTime="2026-02-28 09:52:06.64266857 +0000 UTC m=+3090.333471381" Feb 28 09:52:12 crc kubenswrapper[4996]: I0228 09:52:12.034139 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:52:12 crc kubenswrapper[4996]: E0228 
09:52:12.035210 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:52:12 crc kubenswrapper[4996]: I0228 09:52:12.928891 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Feb 28 09:52:15 crc kubenswrapper[4996]: I0228 09:52:15.012451 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Feb 28 09:52:24 crc kubenswrapper[4996]: I0228 09:52:24.397319 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Feb 28 09:52:26 crc kubenswrapper[4996]: I0228 09:52:26.462049 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Feb 28 09:52:26 crc kubenswrapper[4996]: I0228 09:52:26.827873 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 28 09:52:27 crc kubenswrapper[4996]: I0228 09:52:27.042571 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:52:27 crc kubenswrapper[4996]: E0228 09:52:27.043969 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:52:38 
crc kubenswrapper[4996]: I0228 09:52:38.033636 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:52:38 crc kubenswrapper[4996]: E0228 09:52:38.034883 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:52:45 crc kubenswrapper[4996]: I0228 09:52:45.706515 4996 scope.go:117] "RemoveContainer" containerID="b7fc4b57190d9b6dfb9d19f93589596b1d466129cff47366a3253d43226b12a6" Feb 28 09:52:53 crc kubenswrapper[4996]: I0228 09:52:53.033305 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:52:53 crc kubenswrapper[4996]: E0228 09:52:53.034686 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:53:04 crc kubenswrapper[4996]: I0228 09:53:04.034034 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:53:04 crc kubenswrapper[4996]: E0228 09:53:04.034883 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:53:16 crc kubenswrapper[4996]: I0228 09:53:16.034156 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:53:16 crc kubenswrapper[4996]: E0228 09:53:16.035255 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.594115 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.596135 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.599487 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.599506 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.599706 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-98kdl" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.600050 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.614295 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.614356 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.614440 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " 
pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.619667 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716086 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716158 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716200 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716244 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716311 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716351 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716378 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716404 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716444 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b7z4\" (UniqueName: \"kubernetes.io/projected/5f62e7a0-18c6-441e-8804-4760a6dd1efc-kube-api-access-8b7z4\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.716480 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.718370 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.718869 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.722993 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.818438 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b7z4\" (UniqueName: \"kubernetes.io/projected/5f62e7a0-18c6-441e-8804-4760a6dd1efc-kube-api-access-8b7z4\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.818495 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.818586 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.818654 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.818695 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.818716 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.818732 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.818952 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.819208 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.819545 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.822233 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.823599 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.825592 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.836135 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b7z4\" (UniqueName: \"kubernetes.io/projected/5f62e7a0-18c6-441e-8804-4760a6dd1efc-kube-api-access-8b7z4\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.851073 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:17 crc kubenswrapper[4996]: I0228 09:53:17.924868 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Feb 28 09:53:18 crc kubenswrapper[4996]: I0228 09:53:18.459757 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Feb 28 09:53:18 crc kubenswrapper[4996]: I0228 09:53:18.460970 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:53:19 crc kubenswrapper[4996]: I0228 09:53:19.362555 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"5f62e7a0-18c6-441e-8804-4760a6dd1efc","Type":"ContainerStarted","Data":"69f91a25201472323c99f67cc9e825199f9c0e2884f4913a0c40c6b39f1af2f9"} Feb 28 09:53:30 crc kubenswrapper[4996]: I0228 09:53:30.033766 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:53:30 crc kubenswrapper[4996]: E0228 09:53:30.035038 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:53:45 crc kubenswrapper[4996]: I0228 09:53:45.034269 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:53:45 crc kubenswrapper[4996]: E0228 09:53:45.035306 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:53:53 crc kubenswrapper[4996]: E0228 09:53:53.891442 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 28 09:53:53 crc kubenswrapper[4996]: E0228 09:53:53.892124 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadO
nly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8b7z4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-full_openstack(5f62e7a0-18c6-441e-8804-4760a6dd1efc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 09:53:53 crc kubenswrapper[4996]: E0228 09:53:53.893316 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="5f62e7a0-18c6-441e-8804-4760a6dd1efc" Feb 28 09:53:54 crc kubenswrapper[4996]: E0228 09:53:54.728472 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="5f62e7a0-18c6-441e-8804-4760a6dd1efc" Feb 28 09:53:59 crc kubenswrapper[4996]: I0228 09:53:59.033486 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:53:59 crc kubenswrapper[4996]: E0228 09:53:59.034578 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.155257 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537874-4dnmb"] Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.157250 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537874-4dnmb" Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.159183 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.159728 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.160098 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.186848 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537874-4dnmb"] Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.229565 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8b5b\" (UniqueName: \"kubernetes.io/projected/c57accc5-b1c1-42ea-879e-de23414d5c68-kube-api-access-l8b5b\") pod \"auto-csr-approver-29537874-4dnmb\" (UID: \"c57accc5-b1c1-42ea-879e-de23414d5c68\") " pod="openshift-infra/auto-csr-approver-29537874-4dnmb" Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.332188 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8b5b\" (UniqueName: \"kubernetes.io/projected/c57accc5-b1c1-42ea-879e-de23414d5c68-kube-api-access-l8b5b\") pod \"auto-csr-approver-29537874-4dnmb\" (UID: \"c57accc5-b1c1-42ea-879e-de23414d5c68\") " pod="openshift-infra/auto-csr-approver-29537874-4dnmb" Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.361169 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8b5b\" (UniqueName: \"kubernetes.io/projected/c57accc5-b1c1-42ea-879e-de23414d5c68-kube-api-access-l8b5b\") pod \"auto-csr-approver-29537874-4dnmb\" (UID: \"c57accc5-b1c1-42ea-879e-de23414d5c68\") " 
pod="openshift-infra/auto-csr-approver-29537874-4dnmb" Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.487116 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537874-4dnmb" Feb 28 09:54:00 crc kubenswrapper[4996]: W0228 09:54:00.989622 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc57accc5_b1c1_42ea_879e_de23414d5c68.slice/crio-251aca33e1d9811208ee400797e50bc6b79728d5260c6f2325f3d55089d4fd8c WatchSource:0}: Error finding container 251aca33e1d9811208ee400797e50bc6b79728d5260c6f2325f3d55089d4fd8c: Status 404 returned error can't find the container with id 251aca33e1d9811208ee400797e50bc6b79728d5260c6f2325f3d55089d4fd8c Feb 28 09:54:00 crc kubenswrapper[4996]: I0228 09:54:00.991937 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537874-4dnmb"] Feb 28 09:54:01 crc kubenswrapper[4996]: I0228 09:54:01.836047 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537874-4dnmb" event={"ID":"c57accc5-b1c1-42ea-879e-de23414d5c68","Type":"ContainerStarted","Data":"251aca33e1d9811208ee400797e50bc6b79728d5260c6f2325f3d55089d4fd8c"} Feb 28 09:54:02 crc kubenswrapper[4996]: I0228 09:54:02.847361 4996 generic.go:334] "Generic (PLEG): container finished" podID="c57accc5-b1c1-42ea-879e-de23414d5c68" containerID="47fffa18b047164770b7b2cd070c6070dfe48129fc140c273cfe64fc3c3cf968" exitCode=0 Feb 28 09:54:02 crc kubenswrapper[4996]: I0228 09:54:02.847400 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537874-4dnmb" event={"ID":"c57accc5-b1c1-42ea-879e-de23414d5c68","Type":"ContainerDied","Data":"47fffa18b047164770b7b2cd070c6070dfe48129fc140c273cfe64fc3c3cf968"} Feb 28 09:54:04 crc kubenswrapper[4996]: I0228 09:54:04.309687 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537874-4dnmb" Feb 28 09:54:04 crc kubenswrapper[4996]: I0228 09:54:04.416074 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8b5b\" (UniqueName: \"kubernetes.io/projected/c57accc5-b1c1-42ea-879e-de23414d5c68-kube-api-access-l8b5b\") pod \"c57accc5-b1c1-42ea-879e-de23414d5c68\" (UID: \"c57accc5-b1c1-42ea-879e-de23414d5c68\") " Feb 28 09:54:04 crc kubenswrapper[4996]: I0228 09:54:04.422914 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c57accc5-b1c1-42ea-879e-de23414d5c68-kube-api-access-l8b5b" (OuterVolumeSpecName: "kube-api-access-l8b5b") pod "c57accc5-b1c1-42ea-879e-de23414d5c68" (UID: "c57accc5-b1c1-42ea-879e-de23414d5c68"). InnerVolumeSpecName "kube-api-access-l8b5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:54:04 crc kubenswrapper[4996]: I0228 09:54:04.518974 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8b5b\" (UniqueName: \"kubernetes.io/projected/c57accc5-b1c1-42ea-879e-de23414d5c68-kube-api-access-l8b5b\") on node \"crc\" DevicePath \"\"" Feb 28 09:54:04 crc kubenswrapper[4996]: I0228 09:54:04.886067 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537874-4dnmb" event={"ID":"c57accc5-b1c1-42ea-879e-de23414d5c68","Type":"ContainerDied","Data":"251aca33e1d9811208ee400797e50bc6b79728d5260c6f2325f3d55089d4fd8c"} Feb 28 09:54:04 crc kubenswrapper[4996]: I0228 09:54:04.886508 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251aca33e1d9811208ee400797e50bc6b79728d5260c6f2325f3d55089d4fd8c" Feb 28 09:54:04 crc kubenswrapper[4996]: I0228 09:54:04.890266 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537874-4dnmb" Feb 28 09:54:05 crc kubenswrapper[4996]: I0228 09:54:05.398303 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537868-rfx5n"] Feb 28 09:54:05 crc kubenswrapper[4996]: I0228 09:54:05.406224 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537868-rfx5n"] Feb 28 09:54:07 crc kubenswrapper[4996]: I0228 09:54:07.055180 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33251059-ef77-4e71-9779-408745e6ac20" path="/var/lib/kubelet/pods/33251059-ef77-4e71-9779-408745e6ac20/volumes" Feb 28 09:54:07 crc kubenswrapper[4996]: I0228 09:54:07.517909 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 28 09:54:08 crc kubenswrapper[4996]: I0228 09:54:08.944196 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"5f62e7a0-18c6-441e-8804-4760a6dd1efc","Type":"ContainerStarted","Data":"0a84d72e98f18fee94ce465c4857e3cd008738d0f98edd0efdf028012cba9110"} Feb 28 09:54:08 crc kubenswrapper[4996]: I0228 09:54:08.967283 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-full" podStartSLOduration=3.91387074 podStartE2EDuration="52.967258034s" podCreationTimestamp="2026-02-28 09:53:16 +0000 UTC" firstStartedPulling="2026-02-28 09:53:18.460771107 +0000 UTC m=+3162.151573918" lastFinishedPulling="2026-02-28 09:54:07.514158401 +0000 UTC m=+3211.204961212" observedRunningTime="2026-02-28 09:54:08.965581083 +0000 UTC m=+3212.656383974" watchObservedRunningTime="2026-02-28 09:54:08.967258034 +0000 UTC m=+3212.658060885" Feb 28 09:54:12 crc kubenswrapper[4996]: I0228 09:54:12.032726 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:54:12 
crc kubenswrapper[4996]: E0228 09:54:12.033309 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:54:23 crc kubenswrapper[4996]: I0228 09:54:23.033326 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:54:23 crc kubenswrapper[4996]: E0228 09:54:23.034243 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:54:34 crc kubenswrapper[4996]: I0228 09:54:34.033478 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:54:34 crc kubenswrapper[4996]: E0228 09:54:34.034219 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:54:46 crc kubenswrapper[4996]: I0228 09:54:46.160559 4996 scope.go:117] "RemoveContainer" containerID="64cb7c11c61ec81c4797f5682c3ac5d693e2e379010bf65696e52b081ff26061" 
Feb 28 09:54:49 crc kubenswrapper[4996]: I0228 09:54:49.033386 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:54:49 crc kubenswrapper[4996]: E0228 09:54:49.034300 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:55:04 crc kubenswrapper[4996]: I0228 09:55:04.032843 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:55:04 crc kubenswrapper[4996]: E0228 09:55:04.033649 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 09:55:19 crc kubenswrapper[4996]: I0228 09:55:19.033520 4996 scope.go:117] "RemoveContainer" containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:55:19 crc kubenswrapper[4996]: I0228 09:55:19.803474 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"6eb116081369391456f05d941d1318d5b3658ce08cbf4ef14c5126d1f232921f"} Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.192075 4996 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29537876-xvjg9"] Feb 28 09:56:00 crc kubenswrapper[4996]: E0228 09:56:00.193123 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c57accc5-b1c1-42ea-879e-de23414d5c68" containerName="oc" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.193139 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c57accc5-b1c1-42ea-879e-de23414d5c68" containerName="oc" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.193331 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c57accc5-b1c1-42ea-879e-de23414d5c68" containerName="oc" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.194025 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537876-xvjg9" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.195893 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.197441 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.200118 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.208639 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537876-xvjg9"] Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.263732 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qwbf\" (UniqueName: \"kubernetes.io/projected/23c2ecae-bac9-43ef-8f00-826a2cb721f1-kube-api-access-7qwbf\") pod \"auto-csr-approver-29537876-xvjg9\" (UID: \"23c2ecae-bac9-43ef-8f00-826a2cb721f1\") " pod="openshift-infra/auto-csr-approver-29537876-xvjg9" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 
09:56:00.365663 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qwbf\" (UniqueName: \"kubernetes.io/projected/23c2ecae-bac9-43ef-8f00-826a2cb721f1-kube-api-access-7qwbf\") pod \"auto-csr-approver-29537876-xvjg9\" (UID: \"23c2ecae-bac9-43ef-8f00-826a2cb721f1\") " pod="openshift-infra/auto-csr-approver-29537876-xvjg9" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.397426 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qwbf\" (UniqueName: \"kubernetes.io/projected/23c2ecae-bac9-43ef-8f00-826a2cb721f1-kube-api-access-7qwbf\") pod \"auto-csr-approver-29537876-xvjg9\" (UID: \"23c2ecae-bac9-43ef-8f00-826a2cb721f1\") " pod="openshift-infra/auto-csr-approver-29537876-xvjg9" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.513823 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537876-xvjg9" Feb 28 09:56:00 crc kubenswrapper[4996]: I0228 09:56:00.974973 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537876-xvjg9"] Feb 28 09:56:01 crc kubenswrapper[4996]: I0228 09:56:01.199361 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537876-xvjg9" event={"ID":"23c2ecae-bac9-43ef-8f00-826a2cb721f1","Type":"ContainerStarted","Data":"24fdb76afde68763b492b2c711b0da5241f02ab9e522e29be813e8b118ba19a2"} Feb 28 09:56:02 crc kubenswrapper[4996]: I0228 09:56:02.213241 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537876-xvjg9" event={"ID":"23c2ecae-bac9-43ef-8f00-826a2cb721f1","Type":"ContainerStarted","Data":"36313123119f69d49600b5140a48b083eae9ff9efae9cfbfab6a65f6ebd554b2"} Feb 28 09:56:02 crc kubenswrapper[4996]: I0228 09:56:02.231264 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537876-xvjg9" 
podStartSLOduration=1.370808705 podStartE2EDuration="2.23123698s" podCreationTimestamp="2026-02-28 09:56:00 +0000 UTC" firstStartedPulling="2026-02-28 09:56:00.989448481 +0000 UTC m=+3324.680251292" lastFinishedPulling="2026-02-28 09:56:01.849876756 +0000 UTC m=+3325.540679567" observedRunningTime="2026-02-28 09:56:02.226162326 +0000 UTC m=+3325.916965177" watchObservedRunningTime="2026-02-28 09:56:02.23123698 +0000 UTC m=+3325.922039831" Feb 28 09:56:03 crc kubenswrapper[4996]: I0228 09:56:03.236050 4996 generic.go:334] "Generic (PLEG): container finished" podID="23c2ecae-bac9-43ef-8f00-826a2cb721f1" containerID="36313123119f69d49600b5140a48b083eae9ff9efae9cfbfab6a65f6ebd554b2" exitCode=0 Feb 28 09:56:03 crc kubenswrapper[4996]: I0228 09:56:03.238983 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537876-xvjg9" event={"ID":"23c2ecae-bac9-43ef-8f00-826a2cb721f1","Type":"ContainerDied","Data":"36313123119f69d49600b5140a48b083eae9ff9efae9cfbfab6a65f6ebd554b2"} Feb 28 09:56:04 crc kubenswrapper[4996]: I0228 09:56:04.652221 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537876-xvjg9" Feb 28 09:56:04 crc kubenswrapper[4996]: I0228 09:56:04.773929 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qwbf\" (UniqueName: \"kubernetes.io/projected/23c2ecae-bac9-43ef-8f00-826a2cb721f1-kube-api-access-7qwbf\") pod \"23c2ecae-bac9-43ef-8f00-826a2cb721f1\" (UID: \"23c2ecae-bac9-43ef-8f00-826a2cb721f1\") " Feb 28 09:56:04 crc kubenswrapper[4996]: I0228 09:56:04.798289 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c2ecae-bac9-43ef-8f00-826a2cb721f1-kube-api-access-7qwbf" (OuterVolumeSpecName: "kube-api-access-7qwbf") pod "23c2ecae-bac9-43ef-8f00-826a2cb721f1" (UID: "23c2ecae-bac9-43ef-8f00-826a2cb721f1"). InnerVolumeSpecName "kube-api-access-7qwbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:56:04 crc kubenswrapper[4996]: I0228 09:56:04.876106 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qwbf\" (UniqueName: \"kubernetes.io/projected/23c2ecae-bac9-43ef-8f00-826a2cb721f1-kube-api-access-7qwbf\") on node \"crc\" DevicePath \"\"" Feb 28 09:56:05 crc kubenswrapper[4996]: I0228 09:56:05.262385 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537876-xvjg9" event={"ID":"23c2ecae-bac9-43ef-8f00-826a2cb721f1","Type":"ContainerDied","Data":"24fdb76afde68763b492b2c711b0da5241f02ab9e522e29be813e8b118ba19a2"} Feb 28 09:56:05 crc kubenswrapper[4996]: I0228 09:56:05.262735 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24fdb76afde68763b492b2c711b0da5241f02ab9e522e29be813e8b118ba19a2" Feb 28 09:56:05 crc kubenswrapper[4996]: I0228 09:56:05.262569 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537876-xvjg9" Feb 28 09:56:05 crc kubenswrapper[4996]: I0228 09:56:05.301527 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537870-m7jr8"] Feb 28 09:56:05 crc kubenswrapper[4996]: I0228 09:56:05.310938 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537870-m7jr8"] Feb 28 09:56:07 crc kubenswrapper[4996]: I0228 09:56:07.048594 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb6e9c9b-6236-4fb1-b30b-607678ad604a" path="/var/lib/kubelet/pods/fb6e9c9b-6236-4fb1-b30b-607678ad604a/volumes" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.349198 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jdl42"] Feb 28 09:56:19 crc kubenswrapper[4996]: E0228 09:56:19.350196 4996 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="23c2ecae-bac9-43ef-8f00-826a2cb721f1" containerName="oc" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.350209 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c2ecae-bac9-43ef-8f00-826a2cb721f1" containerName="oc" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.350380 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c2ecae-bac9-43ef-8f00-826a2cb721f1" containerName="oc" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.356023 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.363323 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdl42"] Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.473533 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-utilities\") pod \"certified-operators-jdl42\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.473717 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsn9v\" (UniqueName: \"kubernetes.io/projected/e25b16f9-eace-46b2-996e-da66e530f360-kube-api-access-bsn9v\") pod \"certified-operators-jdl42\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.473803 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-catalog-content\") pod \"certified-operators-jdl42\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " 
pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.575643 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-utilities\") pod \"certified-operators-jdl42\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.575714 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsn9v\" (UniqueName: \"kubernetes.io/projected/e25b16f9-eace-46b2-996e-da66e530f360-kube-api-access-bsn9v\") pod \"certified-operators-jdl42\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.575741 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-catalog-content\") pod \"certified-operators-jdl42\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.576227 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-utilities\") pod \"certified-operators-jdl42\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.576300 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-catalog-content\") pod \"certified-operators-jdl42\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " 
pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.598381 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsn9v\" (UniqueName: \"kubernetes.io/projected/e25b16f9-eace-46b2-996e-da66e530f360-kube-api-access-bsn9v\") pod \"certified-operators-jdl42\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:19 crc kubenswrapper[4996]: I0228 09:56:19.698037 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:20 crc kubenswrapper[4996]: I0228 09:56:20.191359 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jdl42"] Feb 28 09:56:20 crc kubenswrapper[4996]: I0228 09:56:20.421077 4996 generic.go:334] "Generic (PLEG): container finished" podID="e25b16f9-eace-46b2-996e-da66e530f360" containerID="fe9414fd06dfe77ce85abdfc47f440b59140508949c892531f30d44409ca6e6b" exitCode=0 Feb 28 09:56:20 crc kubenswrapper[4996]: I0228 09:56:20.421134 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdl42" event={"ID":"e25b16f9-eace-46b2-996e-da66e530f360","Type":"ContainerDied","Data":"fe9414fd06dfe77ce85abdfc47f440b59140508949c892531f30d44409ca6e6b"} Feb 28 09:56:20 crc kubenswrapper[4996]: I0228 09:56:20.421159 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdl42" event={"ID":"e25b16f9-eace-46b2-996e-da66e530f360","Type":"ContainerStarted","Data":"5ce7bf60edb95fbe39dc12e16dee399eb19c4043710526e7327f1bc8f6bb047f"} Feb 28 09:56:22 crc kubenswrapper[4996]: I0228 09:56:22.446757 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdl42" 
event={"ID":"e25b16f9-eace-46b2-996e-da66e530f360","Type":"ContainerStarted","Data":"9f3a72f282b23cd79462c09747c6db38028bf88c27001bdd2ae7a0c29677ed9a"} Feb 28 09:56:23 crc kubenswrapper[4996]: I0228 09:56:23.459076 4996 generic.go:334] "Generic (PLEG): container finished" podID="e25b16f9-eace-46b2-996e-da66e530f360" containerID="9f3a72f282b23cd79462c09747c6db38028bf88c27001bdd2ae7a0c29677ed9a" exitCode=0 Feb 28 09:56:23 crc kubenswrapper[4996]: I0228 09:56:23.459483 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdl42" event={"ID":"e25b16f9-eace-46b2-996e-da66e530f360","Type":"ContainerDied","Data":"9f3a72f282b23cd79462c09747c6db38028bf88c27001bdd2ae7a0c29677ed9a"} Feb 28 09:56:24 crc kubenswrapper[4996]: I0228 09:56:24.474799 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdl42" event={"ID":"e25b16f9-eace-46b2-996e-da66e530f360","Type":"ContainerStarted","Data":"d31c9940f4bec0179d0bce416e1fbc90b94ea029228d49f660372986c0fb5539"} Feb 28 09:56:24 crc kubenswrapper[4996]: I0228 09:56:24.508488 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jdl42" podStartSLOduration=2.048382856 podStartE2EDuration="5.508469304s" podCreationTimestamp="2026-02-28 09:56:19 +0000 UTC" firstStartedPulling="2026-02-28 09:56:20.422685165 +0000 UTC m=+3344.113487986" lastFinishedPulling="2026-02-28 09:56:23.882771623 +0000 UTC m=+3347.573574434" observedRunningTime="2026-02-28 09:56:24.501788962 +0000 UTC m=+3348.192591783" watchObservedRunningTime="2026-02-28 09:56:24.508469304 +0000 UTC m=+3348.199272115" Feb 28 09:56:29 crc kubenswrapper[4996]: I0228 09:56:29.698793 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:29 crc kubenswrapper[4996]: I0228 09:56:29.699527 4996 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:29 crc kubenswrapper[4996]: I0228 09:56:29.753689 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:30 crc kubenswrapper[4996]: I0228 09:56:30.577142 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:30 crc kubenswrapper[4996]: I0228 09:56:30.640429 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdl42"] Feb 28 09:56:32 crc kubenswrapper[4996]: I0228 09:56:32.542689 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jdl42" podUID="e25b16f9-eace-46b2-996e-da66e530f360" containerName="registry-server" containerID="cri-o://d31c9940f4bec0179d0bce416e1fbc90b94ea029228d49f660372986c0fb5539" gracePeriod=2 Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.554799 4996 generic.go:334] "Generic (PLEG): container finished" podID="e25b16f9-eace-46b2-996e-da66e530f360" containerID="d31c9940f4bec0179d0bce416e1fbc90b94ea029228d49f660372986c0fb5539" exitCode=0 Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.554872 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdl42" event={"ID":"e25b16f9-eace-46b2-996e-da66e530f360","Type":"ContainerDied","Data":"d31c9940f4bec0179d0bce416e1fbc90b94ea029228d49f660372986c0fb5539"} Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.645512 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.782028 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-utilities\") pod \"e25b16f9-eace-46b2-996e-da66e530f360\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.782176 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-catalog-content\") pod \"e25b16f9-eace-46b2-996e-da66e530f360\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.782273 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsn9v\" (UniqueName: \"kubernetes.io/projected/e25b16f9-eace-46b2-996e-da66e530f360-kube-api-access-bsn9v\") pod \"e25b16f9-eace-46b2-996e-da66e530f360\" (UID: \"e25b16f9-eace-46b2-996e-da66e530f360\") " Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.784218 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-utilities" (OuterVolumeSpecName: "utilities") pod "e25b16f9-eace-46b2-996e-da66e530f360" (UID: "e25b16f9-eace-46b2-996e-da66e530f360"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.790312 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25b16f9-eace-46b2-996e-da66e530f360-kube-api-access-bsn9v" (OuterVolumeSpecName: "kube-api-access-bsn9v") pod "e25b16f9-eace-46b2-996e-da66e530f360" (UID: "e25b16f9-eace-46b2-996e-da66e530f360"). InnerVolumeSpecName "kube-api-access-bsn9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.837347 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e25b16f9-eace-46b2-996e-da66e530f360" (UID: "e25b16f9-eace-46b2-996e-da66e530f360"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.884566 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.884598 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsn9v\" (UniqueName: \"kubernetes.io/projected/e25b16f9-eace-46b2-996e-da66e530f360-kube-api-access-bsn9v\") on node \"crc\" DevicePath \"\"" Feb 28 09:56:33 crc kubenswrapper[4996]: I0228 09:56:33.884609 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e25b16f9-eace-46b2-996e-da66e530f360-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:56:34 crc kubenswrapper[4996]: I0228 09:56:34.583439 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jdl42" event={"ID":"e25b16f9-eace-46b2-996e-da66e530f360","Type":"ContainerDied","Data":"5ce7bf60edb95fbe39dc12e16dee399eb19c4043710526e7327f1bc8f6bb047f"} Feb 28 09:56:34 crc kubenswrapper[4996]: I0228 09:56:34.583511 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jdl42" Feb 28 09:56:34 crc kubenswrapper[4996]: I0228 09:56:34.583860 4996 scope.go:117] "RemoveContainer" containerID="d31c9940f4bec0179d0bce416e1fbc90b94ea029228d49f660372986c0fb5539" Feb 28 09:56:34 crc kubenswrapper[4996]: I0228 09:56:34.626142 4996 scope.go:117] "RemoveContainer" containerID="9f3a72f282b23cd79462c09747c6db38028bf88c27001bdd2ae7a0c29677ed9a" Feb 28 09:56:34 crc kubenswrapper[4996]: I0228 09:56:34.638738 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jdl42"] Feb 28 09:56:34 crc kubenswrapper[4996]: I0228 09:56:34.655190 4996 scope.go:117] "RemoveContainer" containerID="fe9414fd06dfe77ce85abdfc47f440b59140508949c892531f30d44409ca6e6b" Feb 28 09:56:34 crc kubenswrapper[4996]: I0228 09:56:34.656613 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jdl42"] Feb 28 09:56:35 crc kubenswrapper[4996]: I0228 09:56:35.047509 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25b16f9-eace-46b2-996e-da66e530f360" path="/var/lib/kubelet/pods/e25b16f9-eace-46b2-996e-da66e530f360/volumes" Feb 28 09:56:46 crc kubenswrapper[4996]: I0228 09:56:46.298946 4996 scope.go:117] "RemoveContainer" containerID="15a0bf747a4bf1f74adb22d61c035c7df94c367207a99fb1c8f043c8d9206922" Feb 28 09:57:42 crc kubenswrapper[4996]: I0228 09:57:42.248492 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:57:42 crc kubenswrapper[4996]: I0228 09:57:42.249111 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.167768 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537878-4qnfr"] Feb 28 09:58:00 crc kubenswrapper[4996]: E0228 09:58:00.169072 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25b16f9-eace-46b2-996e-da66e530f360" containerName="registry-server" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.169098 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25b16f9-eace-46b2-996e-da66e530f360" containerName="registry-server" Feb 28 09:58:00 crc kubenswrapper[4996]: E0228 09:58:00.169127 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25b16f9-eace-46b2-996e-da66e530f360" containerName="extract-content" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.169144 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25b16f9-eace-46b2-996e-da66e530f360" containerName="extract-content" Feb 28 09:58:00 crc kubenswrapper[4996]: E0228 09:58:00.169177 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e25b16f9-eace-46b2-996e-da66e530f360" containerName="extract-utilities" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.169191 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25b16f9-eace-46b2-996e-da66e530f360" containerName="extract-utilities" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.169542 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25b16f9-eace-46b2-996e-da66e530f360" containerName="registry-server" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.170715 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537878-4qnfr" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.173165 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.173460 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.175311 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.182816 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537878-4qnfr"] Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.277294 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dsjg\" (UniqueName: \"kubernetes.io/projected/482cb3db-191b-4fee-b655-e74426932bdf-kube-api-access-7dsjg\") pod \"auto-csr-approver-29537878-4qnfr\" (UID: \"482cb3db-191b-4fee-b655-e74426932bdf\") " pod="openshift-infra/auto-csr-approver-29537878-4qnfr" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.379555 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dsjg\" (UniqueName: \"kubernetes.io/projected/482cb3db-191b-4fee-b655-e74426932bdf-kube-api-access-7dsjg\") pod \"auto-csr-approver-29537878-4qnfr\" (UID: \"482cb3db-191b-4fee-b655-e74426932bdf\") " pod="openshift-infra/auto-csr-approver-29537878-4qnfr" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.411708 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dsjg\" (UniqueName: \"kubernetes.io/projected/482cb3db-191b-4fee-b655-e74426932bdf-kube-api-access-7dsjg\") pod \"auto-csr-approver-29537878-4qnfr\" (UID: \"482cb3db-191b-4fee-b655-e74426932bdf\") " 
pod="openshift-infra/auto-csr-approver-29537878-4qnfr" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.495758 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537878-4qnfr" Feb 28 09:58:00 crc kubenswrapper[4996]: I0228 09:58:00.971491 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537878-4qnfr"] Feb 28 09:58:01 crc kubenswrapper[4996]: I0228 09:58:01.570920 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537878-4qnfr" event={"ID":"482cb3db-191b-4fee-b655-e74426932bdf","Type":"ContainerStarted","Data":"9afcb4f0999d261656ea89e2aa86dc63ee82f14e940ded32a97efd1b4a3a44d0"} Feb 28 09:58:02 crc kubenswrapper[4996]: I0228 09:58:02.587136 4996 generic.go:334] "Generic (PLEG): container finished" podID="482cb3db-191b-4fee-b655-e74426932bdf" containerID="348d1606179f0d44ad3599dfbbf0bd1479bd3895c578492a33e0665dfc9d6e4d" exitCode=0 Feb 28 09:58:02 crc kubenswrapper[4996]: I0228 09:58:02.587207 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537878-4qnfr" event={"ID":"482cb3db-191b-4fee-b655-e74426932bdf","Type":"ContainerDied","Data":"348d1606179f0d44ad3599dfbbf0bd1479bd3895c578492a33e0665dfc9d6e4d"} Feb 28 09:58:03 crc kubenswrapper[4996]: I0228 09:58:03.999464 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537878-4qnfr" Feb 28 09:58:04 crc kubenswrapper[4996]: I0228 09:58:04.159256 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dsjg\" (UniqueName: \"kubernetes.io/projected/482cb3db-191b-4fee-b655-e74426932bdf-kube-api-access-7dsjg\") pod \"482cb3db-191b-4fee-b655-e74426932bdf\" (UID: \"482cb3db-191b-4fee-b655-e74426932bdf\") " Feb 28 09:58:04 crc kubenswrapper[4996]: I0228 09:58:04.166311 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482cb3db-191b-4fee-b655-e74426932bdf-kube-api-access-7dsjg" (OuterVolumeSpecName: "kube-api-access-7dsjg") pod "482cb3db-191b-4fee-b655-e74426932bdf" (UID: "482cb3db-191b-4fee-b655-e74426932bdf"). InnerVolumeSpecName "kube-api-access-7dsjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:58:04 crc kubenswrapper[4996]: I0228 09:58:04.263553 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dsjg\" (UniqueName: \"kubernetes.io/projected/482cb3db-191b-4fee-b655-e74426932bdf-kube-api-access-7dsjg\") on node \"crc\" DevicePath \"\"" Feb 28 09:58:04 crc kubenswrapper[4996]: I0228 09:58:04.609758 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537878-4qnfr" event={"ID":"482cb3db-191b-4fee-b655-e74426932bdf","Type":"ContainerDied","Data":"9afcb4f0999d261656ea89e2aa86dc63ee82f14e940ded32a97efd1b4a3a44d0"} Feb 28 09:58:04 crc kubenswrapper[4996]: I0228 09:58:04.610226 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9afcb4f0999d261656ea89e2aa86dc63ee82f14e940ded32a97efd1b4a3a44d0" Feb 28 09:58:04 crc kubenswrapper[4996]: I0228 09:58:04.609860 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537878-4qnfr" Feb 28 09:58:05 crc kubenswrapper[4996]: I0228 09:58:05.154681 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537872-kpqnp"] Feb 28 09:58:05 crc kubenswrapper[4996]: I0228 09:58:05.167957 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537872-kpqnp"] Feb 28 09:58:07 crc kubenswrapper[4996]: I0228 09:58:07.052525 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16434a5d-8631-4313-83e4-8b100e810aca" path="/var/lib/kubelet/pods/16434a5d-8631-4313-83e4-8b100e810aca/volumes" Feb 28 09:58:12 crc kubenswrapper[4996]: I0228 09:58:12.249272 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:58:12 crc kubenswrapper[4996]: I0228 09:58:12.249920 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:58:42 crc kubenswrapper[4996]: I0228 09:58:42.248564 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 09:58:42 crc kubenswrapper[4996]: I0228 09:58:42.249104 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 09:58:42 crc kubenswrapper[4996]: I0228 09:58:42.249174 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 09:58:42 crc kubenswrapper[4996]: I0228 09:58:42.249927 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6eb116081369391456f05d941d1318d5b3658ce08cbf4ef14c5126d1f232921f"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 09:58:42 crc kubenswrapper[4996]: I0228 09:58:42.249978 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://6eb116081369391456f05d941d1318d5b3658ce08cbf4ef14c5126d1f232921f" gracePeriod=600 Feb 28 09:58:42 crc kubenswrapper[4996]: I0228 09:58:42.963302 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="6eb116081369391456f05d941d1318d5b3658ce08cbf4ef14c5126d1f232921f" exitCode=0 Feb 28 09:58:42 crc kubenswrapper[4996]: I0228 09:58:42.963337 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"6eb116081369391456f05d941d1318d5b3658ce08cbf4ef14c5126d1f232921f"} Feb 28 09:58:42 crc kubenswrapper[4996]: I0228 09:58:42.963565 4996 scope.go:117] "RemoveContainer" 
containerID="f05b39b95aac2fb0766f40d962cc92743de23511aed014f99ba33a9ab3111c50" Feb 28 09:58:44 crc kubenswrapper[4996]: I0228 09:58:44.005746 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a"} Feb 28 09:58:46 crc kubenswrapper[4996]: I0228 09:58:46.445172 4996 scope.go:117] "RemoveContainer" containerID="bea35b779565b590d3954035610d13e158e3923ba7ac113ea4e7b3bfb316b209" Feb 28 09:59:26 crc kubenswrapper[4996]: I0228 09:59:26.933106 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cbb75"] Feb 28 09:59:26 crc kubenswrapper[4996]: E0228 09:59:26.935061 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482cb3db-191b-4fee-b655-e74426932bdf" containerName="oc" Feb 28 09:59:26 crc kubenswrapper[4996]: I0228 09:59:26.935148 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="482cb3db-191b-4fee-b655-e74426932bdf" containerName="oc" Feb 28 09:59:26 crc kubenswrapper[4996]: I0228 09:59:26.935366 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="482cb3db-191b-4fee-b655-e74426932bdf" containerName="oc" Feb 28 09:59:26 crc kubenswrapper[4996]: I0228 09:59:26.941043 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:26 crc kubenswrapper[4996]: I0228 09:59:26.951357 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbb75"] Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.016194 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8bw7\" (UniqueName: \"kubernetes.io/projected/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-kube-api-access-f8bw7\") pod \"redhat-marketplace-cbb75\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.016467 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-utilities\") pod \"redhat-marketplace-cbb75\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.016839 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-catalog-content\") pod \"redhat-marketplace-cbb75\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.119097 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-utilities\") pod \"redhat-marketplace-cbb75\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.119300 4996 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-catalog-content\") pod \"redhat-marketplace-cbb75\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.119337 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8bw7\" (UniqueName: \"kubernetes.io/projected/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-kube-api-access-f8bw7\") pod \"redhat-marketplace-cbb75\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.119710 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-utilities\") pod \"redhat-marketplace-cbb75\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.120706 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-catalog-content\") pod \"redhat-marketplace-cbb75\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.144324 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8bw7\" (UniqueName: \"kubernetes.io/projected/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-kube-api-access-f8bw7\") pod \"redhat-marketplace-cbb75\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.261406 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:27 crc kubenswrapper[4996]: I0228 09:59:27.758305 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbb75"] Feb 28 09:59:28 crc kubenswrapper[4996]: I0228 09:59:28.450182 4996 generic.go:334] "Generic (PLEG): container finished" podID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerID="d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3" exitCode=0 Feb 28 09:59:28 crc kubenswrapper[4996]: I0228 09:59:28.450273 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbb75" event={"ID":"221739d1-3dab-41ed-9d61-e7cfb6fe4c84","Type":"ContainerDied","Data":"d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3"} Feb 28 09:59:28 crc kubenswrapper[4996]: I0228 09:59:28.450520 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbb75" event={"ID":"221739d1-3dab-41ed-9d61-e7cfb6fe4c84","Type":"ContainerStarted","Data":"e0771fadcb045f3ea758e27f095cc381f54810cdbbff15433c3e8297486fe426"} Feb 28 09:59:28 crc kubenswrapper[4996]: I0228 09:59:28.452687 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 09:59:29 crc kubenswrapper[4996]: I0228 09:59:29.460495 4996 generic.go:334] "Generic (PLEG): container finished" podID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerID="3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0" exitCode=0 Feb 28 09:59:29 crc kubenswrapper[4996]: I0228 09:59:29.460533 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbb75" event={"ID":"221739d1-3dab-41ed-9d61-e7cfb6fe4c84","Type":"ContainerDied","Data":"3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0"} Feb 28 09:59:30 crc kubenswrapper[4996]: I0228 09:59:30.471104 4996 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-cbb75" event={"ID":"221739d1-3dab-41ed-9d61-e7cfb6fe4c84","Type":"ContainerStarted","Data":"ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401"} Feb 28 09:59:37 crc kubenswrapper[4996]: I0228 09:59:37.262452 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:37 crc kubenswrapper[4996]: I0228 09:59:37.262997 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:37 crc kubenswrapper[4996]: I0228 09:59:37.318394 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:37 crc kubenswrapper[4996]: I0228 09:59:37.337975 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cbb75" podStartSLOduration=9.572176591 podStartE2EDuration="11.337959645s" podCreationTimestamp="2026-02-28 09:59:26 +0000 UTC" firstStartedPulling="2026-02-28 09:59:28.452423675 +0000 UTC m=+3532.143226496" lastFinishedPulling="2026-02-28 09:59:30.218206719 +0000 UTC m=+3533.909009550" observedRunningTime="2026-02-28 09:59:30.498762908 +0000 UTC m=+3534.189565729" watchObservedRunningTime="2026-02-28 09:59:37.337959645 +0000 UTC m=+3541.028762446" Feb 28 09:59:37 crc kubenswrapper[4996]: I0228 09:59:37.589805 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:37 crc kubenswrapper[4996]: I0228 09:59:37.643597 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbb75"] Feb 28 09:59:39 crc kubenswrapper[4996]: I0228 09:59:39.540731 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cbb75" 
podUID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerName="registry-server" containerID="cri-o://ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401" gracePeriod=2 Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.194849 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.210997 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-utilities\") pod \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.211126 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-catalog-content\") pod \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.211275 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8bw7\" (UniqueName: \"kubernetes.io/projected/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-kube-api-access-f8bw7\") pod \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\" (UID: \"221739d1-3dab-41ed-9d61-e7cfb6fe4c84\") " Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.214505 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-utilities" (OuterVolumeSpecName: "utilities") pod "221739d1-3dab-41ed-9d61-e7cfb6fe4c84" (UID: "221739d1-3dab-41ed-9d61-e7cfb6fe4c84"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.229438 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-kube-api-access-f8bw7" (OuterVolumeSpecName: "kube-api-access-f8bw7") pod "221739d1-3dab-41ed-9d61-e7cfb6fe4c84" (UID: "221739d1-3dab-41ed-9d61-e7cfb6fe4c84"). InnerVolumeSpecName "kube-api-access-f8bw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.247786 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "221739d1-3dab-41ed-9d61-e7cfb6fe4c84" (UID: "221739d1-3dab-41ed-9d61-e7cfb6fe4c84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.313230 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.313263 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.313276 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8bw7\" (UniqueName: \"kubernetes.io/projected/221739d1-3dab-41ed-9d61-e7cfb6fe4c84-kube-api-access-f8bw7\") on node \"crc\" DevicePath \"\"" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.552360 4996 generic.go:334] "Generic (PLEG): container finished" podID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" 
containerID="ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401" exitCode=0 Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.552411 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbb75" event={"ID":"221739d1-3dab-41ed-9d61-e7cfb6fe4c84","Type":"ContainerDied","Data":"ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401"} Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.552447 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cbb75" event={"ID":"221739d1-3dab-41ed-9d61-e7cfb6fe4c84","Type":"ContainerDied","Data":"e0771fadcb045f3ea758e27f095cc381f54810cdbbff15433c3e8297486fe426"} Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.552505 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cbb75" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.552545 4996 scope.go:117] "RemoveContainer" containerID="ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.592519 4996 scope.go:117] "RemoveContainer" containerID="3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.592919 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbb75"] Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.605266 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cbb75"] Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.629094 4996 scope.go:117] "RemoveContainer" containerID="d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.660677 4996 scope.go:117] "RemoveContainer" containerID="ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401" Feb 28 
09:59:40 crc kubenswrapper[4996]: E0228 09:59:40.661189 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401\": container with ID starting with ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401 not found: ID does not exist" containerID="ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.661220 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401"} err="failed to get container status \"ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401\": rpc error: code = NotFound desc = could not find container \"ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401\": container with ID starting with ec3fd999030fff93e4b2a9e7881aebf52d546351c2cc1af54e104482628f1401 not found: ID does not exist" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.661239 4996 scope.go:117] "RemoveContainer" containerID="3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0" Feb 28 09:59:40 crc kubenswrapper[4996]: E0228 09:59:40.661947 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0\": container with ID starting with 3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0 not found: ID does not exist" containerID="3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.661969 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0"} err="failed to get container status 
\"3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0\": rpc error: code = NotFound desc = could not find container \"3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0\": container with ID starting with 3c164ee4cd4dd1972c35b3c7e219dc28bed2bb447bd0e0973c9cb110f998e6a0 not found: ID does not exist" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.661981 4996 scope.go:117] "RemoveContainer" containerID="d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3" Feb 28 09:59:40 crc kubenswrapper[4996]: E0228 09:59:40.662233 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3\": container with ID starting with d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3 not found: ID does not exist" containerID="d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3" Feb 28 09:59:40 crc kubenswrapper[4996]: I0228 09:59:40.662268 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3"} err="failed to get container status \"d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3\": rpc error: code = NotFound desc = could not find container \"d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3\": container with ID starting with d2191e610645eb3766b51261441c969c0768a24db855d7b06f73395617ec91f3 not found: ID does not exist" Feb 28 09:59:41 crc kubenswrapper[4996]: I0228 09:59:41.045280 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" path="/var/lib/kubelet/pods/221739d1-3dab-41ed-9d61-e7cfb6fe4c84/volumes" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.147736 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537880-8twc2"] Feb 28 10:00:00 
crc kubenswrapper[4996]: E0228 10:00:00.148609 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerName="extract-utilities" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.148622 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerName="extract-utilities" Feb 28 10:00:00 crc kubenswrapper[4996]: E0228 10:00:00.148646 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerName="extract-content" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.148653 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerName="extract-content" Feb 28 10:00:00 crc kubenswrapper[4996]: E0228 10:00:00.148666 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerName="registry-server" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.148672 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerName="registry-server" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.148852 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="221739d1-3dab-41ed-9d61-e7cfb6fe4c84" containerName="registry-server" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.149662 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537880-8twc2" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.153052 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.153216 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.153296 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.170275 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537880-8twc2"] Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.228601 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26c6v\" (UniqueName: \"kubernetes.io/projected/a8c89340-1270-4c96-8dc8-70f2a0c5b8a5-kube-api-access-26c6v\") pod \"auto-csr-approver-29537880-8twc2\" (UID: \"a8c89340-1270-4c96-8dc8-70f2a0c5b8a5\") " pod="openshift-infra/auto-csr-approver-29537880-8twc2" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.252026 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv"] Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.253513 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.257622 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.257901 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.265931 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv"] Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.330483 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26c6v\" (UniqueName: \"kubernetes.io/projected/a8c89340-1270-4c96-8dc8-70f2a0c5b8a5-kube-api-access-26c6v\") pod \"auto-csr-approver-29537880-8twc2\" (UID: \"a8c89340-1270-4c96-8dc8-70f2a0c5b8a5\") " pod="openshift-infra/auto-csr-approver-29537880-8twc2" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.330866 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77675dd-10db-46e8-9236-10af0a0d602a-config-volume\") pod \"collect-profiles-29537880-8jcgv\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.331077 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c77675dd-10db-46e8-9236-10af0a0d602a-secret-volume\") pod \"collect-profiles-29537880-8jcgv\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 
10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.331269 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7nrq\" (UniqueName: \"kubernetes.io/projected/c77675dd-10db-46e8-9236-10af0a0d602a-kube-api-access-p7nrq\") pod \"collect-profiles-29537880-8jcgv\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.350871 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26c6v\" (UniqueName: \"kubernetes.io/projected/a8c89340-1270-4c96-8dc8-70f2a0c5b8a5-kube-api-access-26c6v\") pod \"auto-csr-approver-29537880-8twc2\" (UID: \"a8c89340-1270-4c96-8dc8-70f2a0c5b8a5\") " pod="openshift-infra/auto-csr-approver-29537880-8twc2" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.433616 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77675dd-10db-46e8-9236-10af0a0d602a-config-volume\") pod \"collect-profiles-29537880-8jcgv\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.433767 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c77675dd-10db-46e8-9236-10af0a0d602a-secret-volume\") pod \"collect-profiles-29537880-8jcgv\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.433867 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7nrq\" (UniqueName: \"kubernetes.io/projected/c77675dd-10db-46e8-9236-10af0a0d602a-kube-api-access-p7nrq\") pod 
\"collect-profiles-29537880-8jcgv\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.435061 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77675dd-10db-46e8-9236-10af0a0d602a-config-volume\") pod \"collect-profiles-29537880-8jcgv\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.439044 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c77675dd-10db-46e8-9236-10af0a0d602a-secret-volume\") pod \"collect-profiles-29537880-8jcgv\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.453227 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7nrq\" (UniqueName: \"kubernetes.io/projected/c77675dd-10db-46e8-9236-10af0a0d602a-kube-api-access-p7nrq\") pod \"collect-profiles-29537880-8jcgv\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.472153 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537880-8twc2" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.583313 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:00 crc kubenswrapper[4996]: I0228 10:00:00.942147 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537880-8twc2"] Feb 28 10:00:00 crc kubenswrapper[4996]: W0228 10:00:00.945236 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c89340_1270_4c96_8dc8_70f2a0c5b8a5.slice/crio-16d7a3d733904ac760e97aaada5028880203f7b33333707292e6a1612b0fe5c5 WatchSource:0}: Error finding container 16d7a3d733904ac760e97aaada5028880203f7b33333707292e6a1612b0fe5c5: Status 404 returned error can't find the container with id 16d7a3d733904ac760e97aaada5028880203f7b33333707292e6a1612b0fe5c5 Feb 28 10:00:01 crc kubenswrapper[4996]: I0228 10:00:01.080497 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv"] Feb 28 10:00:01 crc kubenswrapper[4996]: W0228 10:00:01.082114 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc77675dd_10db_46e8_9236_10af0a0d602a.slice/crio-ed9b4ace1fd45aa6ffe4ad4d6fcf1b2160a70c4baf6b1ce0bc4070753016b3f6 WatchSource:0}: Error finding container ed9b4ace1fd45aa6ffe4ad4d6fcf1b2160a70c4baf6b1ce0bc4070753016b3f6: Status 404 returned error can't find the container with id ed9b4ace1fd45aa6ffe4ad4d6fcf1b2160a70c4baf6b1ce0bc4070753016b3f6 Feb 28 10:00:01 crc kubenswrapper[4996]: I0228 10:00:01.748825 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537880-8twc2" event={"ID":"a8c89340-1270-4c96-8dc8-70f2a0c5b8a5","Type":"ContainerStarted","Data":"16d7a3d733904ac760e97aaada5028880203f7b33333707292e6a1612b0fe5c5"} Feb 28 10:00:01 crc kubenswrapper[4996]: I0228 10:00:01.750509 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="c77675dd-10db-46e8-9236-10af0a0d602a" containerID="7afa7dd2c546e7a60459a7a4465fe693ce104655089d7ffae43c4824fa6307f2" exitCode=0 Feb 28 10:00:01 crc kubenswrapper[4996]: I0228 10:00:01.750609 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" event={"ID":"c77675dd-10db-46e8-9236-10af0a0d602a","Type":"ContainerDied","Data":"7afa7dd2c546e7a60459a7a4465fe693ce104655089d7ffae43c4824fa6307f2"} Feb 28 10:00:01 crc kubenswrapper[4996]: I0228 10:00:01.750679 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" event={"ID":"c77675dd-10db-46e8-9236-10af0a0d602a","Type":"ContainerStarted","Data":"ed9b4ace1fd45aa6ffe4ad4d6fcf1b2160a70c4baf6b1ce0bc4070753016b3f6"} Feb 28 10:00:01 crc kubenswrapper[4996]: E0228 10:00:01.826403 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc77675dd_10db_46e8_9236_10af0a0d602a.slice/crio-7afa7dd2c546e7a60459a7a4465fe693ce104655089d7ffae43c4824fa6307f2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc77675dd_10db_46e8_9236_10af0a0d602a.slice/crio-conmon-7afa7dd2c546e7a60459a7a4465fe693ce104655089d7ffae43c4824fa6307f2.scope\": RecentStats: unable to find data in memory cache]" Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.254022 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.295701 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7nrq\" (UniqueName: \"kubernetes.io/projected/c77675dd-10db-46e8-9236-10af0a0d602a-kube-api-access-p7nrq\") pod \"c77675dd-10db-46e8-9236-10af0a0d602a\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.295916 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77675dd-10db-46e8-9236-10af0a0d602a-config-volume\") pod \"c77675dd-10db-46e8-9236-10af0a0d602a\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.296090 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c77675dd-10db-46e8-9236-10af0a0d602a-secret-volume\") pod \"c77675dd-10db-46e8-9236-10af0a0d602a\" (UID: \"c77675dd-10db-46e8-9236-10af0a0d602a\") " Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.297165 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c77675dd-10db-46e8-9236-10af0a0d602a-config-volume" (OuterVolumeSpecName: "config-volume") pod "c77675dd-10db-46e8-9236-10af0a0d602a" (UID: "c77675dd-10db-46e8-9236-10af0a0d602a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.300975 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c77675dd-10db-46e8-9236-10af0a0d602a-kube-api-access-p7nrq" (OuterVolumeSpecName: "kube-api-access-p7nrq") pod "c77675dd-10db-46e8-9236-10af0a0d602a" (UID: "c77675dd-10db-46e8-9236-10af0a0d602a"). 
InnerVolumeSpecName "kube-api-access-p7nrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.302489 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c77675dd-10db-46e8-9236-10af0a0d602a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c77675dd-10db-46e8-9236-10af0a0d602a" (UID: "c77675dd-10db-46e8-9236-10af0a0d602a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.399051 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7nrq\" (UniqueName: \"kubernetes.io/projected/c77675dd-10db-46e8-9236-10af0a0d602a-kube-api-access-p7nrq\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.399092 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c77675dd-10db-46e8-9236-10af0a0d602a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.399101 4996 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c77675dd-10db-46e8-9236-10af0a0d602a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.768578 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" event={"ID":"c77675dd-10db-46e8-9236-10af0a0d602a","Type":"ContainerDied","Data":"ed9b4ace1fd45aa6ffe4ad4d6fcf1b2160a70c4baf6b1ce0bc4070753016b3f6"} Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.768627 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed9b4ace1fd45aa6ffe4ad4d6fcf1b2160a70c4baf6b1ce0bc4070753016b3f6" Feb 28 10:00:03 crc kubenswrapper[4996]: I0228 10:00:03.768648 4996 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv" Feb 28 10:00:04 crc kubenswrapper[4996]: I0228 10:00:04.346688 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk"] Feb 28 10:00:04 crc kubenswrapper[4996]: I0228 10:00:04.358447 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537835-jnxrk"] Feb 28 10:00:04 crc kubenswrapper[4996]: I0228 10:00:04.781298 4996 generic.go:334] "Generic (PLEG): container finished" podID="a8c89340-1270-4c96-8dc8-70f2a0c5b8a5" containerID="bf2cb9d71a12eb59503c2b7d57f630f1c6537a683dac6c73f9c3bc1a3d6dd21b" exitCode=0 Feb 28 10:00:04 crc kubenswrapper[4996]: I0228 10:00:04.781353 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537880-8twc2" event={"ID":"a8c89340-1270-4c96-8dc8-70f2a0c5b8a5","Type":"ContainerDied","Data":"bf2cb9d71a12eb59503c2b7d57f630f1c6537a683dac6c73f9c3bc1a3d6dd21b"} Feb 28 10:00:05 crc kubenswrapper[4996]: I0228 10:00:05.047545 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40cfa7d1-3c7a-4e8c-b127-14b0177fe785" path="/var/lib/kubelet/pods/40cfa7d1-3c7a-4e8c-b127-14b0177fe785/volumes" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.335196 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537880-8twc2" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.366709 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26c6v\" (UniqueName: \"kubernetes.io/projected/a8c89340-1270-4c96-8dc8-70f2a0c5b8a5-kube-api-access-26c6v\") pod \"a8c89340-1270-4c96-8dc8-70f2a0c5b8a5\" (UID: \"a8c89340-1270-4c96-8dc8-70f2a0c5b8a5\") " Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.373937 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c89340-1270-4c96-8dc8-70f2a0c5b8a5-kube-api-access-26c6v" (OuterVolumeSpecName: "kube-api-access-26c6v") pod "a8c89340-1270-4c96-8dc8-70f2a0c5b8a5" (UID: "a8c89340-1270-4c96-8dc8-70f2a0c5b8a5"). InnerVolumeSpecName "kube-api-access-26c6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.469210 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26c6v\" (UniqueName: \"kubernetes.io/projected/a8c89340-1270-4c96-8dc8-70f2a0c5b8a5-kube-api-access-26c6v\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.715200 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wdqpp"] Feb 28 10:00:06 crc kubenswrapper[4996]: E0228 10:00:06.716508 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c89340-1270-4c96-8dc8-70f2a0c5b8a5" containerName="oc" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.716572 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c89340-1270-4c96-8dc8-70f2a0c5b8a5" containerName="oc" Feb 28 10:00:06 crc kubenswrapper[4996]: E0228 10:00:06.716628 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c77675dd-10db-46e8-9236-10af0a0d602a" containerName="collect-profiles" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.716641 
4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c77675dd-10db-46e8-9236-10af0a0d602a" containerName="collect-profiles" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.718039 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c89340-1270-4c96-8dc8-70f2a0c5b8a5" containerName="oc" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.718106 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c77675dd-10db-46e8-9236-10af0a0d602a" containerName="collect-profiles" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.730738 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.743427 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdqpp"] Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.777150 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-utilities\") pod \"redhat-operators-wdqpp\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.777245 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-catalog-content\") pod \"redhat-operators-wdqpp\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.777296 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26ths\" (UniqueName: \"kubernetes.io/projected/d3caceba-93cf-4476-ae26-c93373a8cdb3-kube-api-access-26ths\") pod 
\"redhat-operators-wdqpp\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.799881 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537880-8twc2" event={"ID":"a8c89340-1270-4c96-8dc8-70f2a0c5b8a5","Type":"ContainerDied","Data":"16d7a3d733904ac760e97aaada5028880203f7b33333707292e6a1612b0fe5c5"} Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.799923 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16d7a3d733904ac760e97aaada5028880203f7b33333707292e6a1612b0fe5c5" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.799961 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537880-8twc2" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.879709 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-catalog-content\") pod \"redhat-operators-wdqpp\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.879781 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26ths\" (UniqueName: \"kubernetes.io/projected/d3caceba-93cf-4476-ae26-c93373a8cdb3-kube-api-access-26ths\") pod \"redhat-operators-wdqpp\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.879898 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-utilities\") pod \"redhat-operators-wdqpp\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " 
pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.880407 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-utilities\") pod \"redhat-operators-wdqpp\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.880598 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-catalog-content\") pod \"redhat-operators-wdqpp\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:06 crc kubenswrapper[4996]: I0228 10:00:06.901400 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26ths\" (UniqueName: \"kubernetes.io/projected/d3caceba-93cf-4476-ae26-c93373a8cdb3-kube-api-access-26ths\") pod \"redhat-operators-wdqpp\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:07 crc kubenswrapper[4996]: I0228 10:00:07.056995 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:07 crc kubenswrapper[4996]: I0228 10:00:07.421488 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537874-4dnmb"] Feb 28 10:00:07 crc kubenswrapper[4996]: I0228 10:00:07.433258 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537874-4dnmb"] Feb 28 10:00:07 crc kubenswrapper[4996]: I0228 10:00:07.539778 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdqpp"] Feb 28 10:00:07 crc kubenswrapper[4996]: I0228 10:00:07.809995 4996 generic.go:334] "Generic (PLEG): container finished" podID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerID="8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920" exitCode=0 Feb 28 10:00:07 crc kubenswrapper[4996]: I0228 10:00:07.810117 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqpp" event={"ID":"d3caceba-93cf-4476-ae26-c93373a8cdb3","Type":"ContainerDied","Data":"8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920"} Feb 28 10:00:07 crc kubenswrapper[4996]: I0228 10:00:07.810489 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqpp" event={"ID":"d3caceba-93cf-4476-ae26-c93373a8cdb3","Type":"ContainerStarted","Data":"239339110fa0ae7ccbc845257a50584ed54d437c39860592fef6d7abb78b308e"} Feb 28 10:00:09 crc kubenswrapper[4996]: I0228 10:00:09.045525 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c57accc5-b1c1-42ea-879e-de23414d5c68" path="/var/lib/kubelet/pods/c57accc5-b1c1-42ea-879e-de23414d5c68/volumes" Feb 28 10:00:09 crc kubenswrapper[4996]: I0228 10:00:09.840641 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqpp" 
event={"ID":"d3caceba-93cf-4476-ae26-c93373a8cdb3","Type":"ContainerStarted","Data":"7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316"} Feb 28 10:00:15 crc kubenswrapper[4996]: I0228 10:00:15.911648 4996 generic.go:334] "Generic (PLEG): container finished" podID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerID="7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316" exitCode=0 Feb 28 10:00:15 crc kubenswrapper[4996]: I0228 10:00:15.911738 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqpp" event={"ID":"d3caceba-93cf-4476-ae26-c93373a8cdb3","Type":"ContainerDied","Data":"7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316"} Feb 28 10:00:16 crc kubenswrapper[4996]: I0228 10:00:16.925822 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqpp" event={"ID":"d3caceba-93cf-4476-ae26-c93373a8cdb3","Type":"ContainerStarted","Data":"9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923"} Feb 28 10:00:16 crc kubenswrapper[4996]: I0228 10:00:16.960575 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wdqpp" podStartSLOduration=2.445546344 podStartE2EDuration="10.960555025s" podCreationTimestamp="2026-02-28 10:00:06 +0000 UTC" firstStartedPulling="2026-02-28 10:00:07.812419742 +0000 UTC m=+3571.503222553" lastFinishedPulling="2026-02-28 10:00:16.327428423 +0000 UTC m=+3580.018231234" observedRunningTime="2026-02-28 10:00:16.95419576 +0000 UTC m=+3580.644998661" watchObservedRunningTime="2026-02-28 10:00:16.960555025 +0000 UTC m=+3580.651357846" Feb 28 10:00:17 crc kubenswrapper[4996]: I0228 10:00:17.057646 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:17 crc kubenswrapper[4996]: I0228 10:00:17.057690 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:18 crc kubenswrapper[4996]: I0228 10:00:18.127967 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wdqpp" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="registry-server" probeResult="failure" output=< Feb 28 10:00:18 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:00:18 crc kubenswrapper[4996]: > Feb 28 10:00:28 crc kubenswrapper[4996]: I0228 10:00:28.107460 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wdqpp" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="registry-server" probeResult="failure" output=< Feb 28 10:00:28 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:00:28 crc kubenswrapper[4996]: > Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.451819 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4g7k8"] Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.453963 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.469522 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4g7k8"] Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.557197 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9lhb\" (UniqueName: \"kubernetes.io/projected/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-kube-api-access-n9lhb\") pod \"community-operators-4g7k8\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.557266 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-utilities\") pod \"community-operators-4g7k8\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.557672 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-catalog-content\") pod \"community-operators-4g7k8\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.659654 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9lhb\" (UniqueName: \"kubernetes.io/projected/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-kube-api-access-n9lhb\") pod \"community-operators-4g7k8\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.659733 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-utilities\") pod \"community-operators-4g7k8\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.659831 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-catalog-content\") pod \"community-operators-4g7k8\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.660671 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-utilities\") pod \"community-operators-4g7k8\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.660675 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-catalog-content\") pod \"community-operators-4g7k8\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.688207 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9lhb\" (UniqueName: \"kubernetes.io/projected/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-kube-api-access-n9lhb\") pod \"community-operators-4g7k8\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:29 crc kubenswrapper[4996]: I0228 10:00:29.785851 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:30 crc kubenswrapper[4996]: I0228 10:00:30.463440 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4g7k8"] Feb 28 10:00:31 crc kubenswrapper[4996]: I0228 10:00:31.104294 4996 generic.go:334] "Generic (PLEG): container finished" podID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerID="2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7" exitCode=0 Feb 28 10:00:31 crc kubenswrapper[4996]: I0228 10:00:31.104361 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g7k8" event={"ID":"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9","Type":"ContainerDied","Data":"2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7"} Feb 28 10:00:31 crc kubenswrapper[4996]: I0228 10:00:31.104620 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g7k8" event={"ID":"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9","Type":"ContainerStarted","Data":"baaa6ffef417b44ac67a9fcf7711b0d872f55572a9d4da5ab70a1ad79b103147"} Feb 28 10:00:32 crc kubenswrapper[4996]: I0228 10:00:32.114709 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g7k8" event={"ID":"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9","Type":"ContainerStarted","Data":"e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0"} Feb 28 10:00:33 crc kubenswrapper[4996]: I0228 10:00:33.122700 4996 generic.go:334] "Generic (PLEG): container finished" podID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerID="e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0" exitCode=0 Feb 28 10:00:33 crc kubenswrapper[4996]: I0228 10:00:33.122790 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g7k8" 
event={"ID":"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9","Type":"ContainerDied","Data":"e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0"} Feb 28 10:00:34 crc kubenswrapper[4996]: I0228 10:00:34.132154 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g7k8" event={"ID":"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9","Type":"ContainerStarted","Data":"51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874"} Feb 28 10:00:34 crc kubenswrapper[4996]: I0228 10:00:34.152932 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4g7k8" podStartSLOduration=2.73585555 podStartE2EDuration="5.152915088s" podCreationTimestamp="2026-02-28 10:00:29 +0000 UTC" firstStartedPulling="2026-02-28 10:00:31.106742804 +0000 UTC m=+3594.797545616" lastFinishedPulling="2026-02-28 10:00:33.523802343 +0000 UTC m=+3597.214605154" observedRunningTime="2026-02-28 10:00:34.148830908 +0000 UTC m=+3597.839633719" watchObservedRunningTime="2026-02-28 10:00:34.152915088 +0000 UTC m=+3597.843717899" Feb 28 10:00:37 crc kubenswrapper[4996]: I0228 10:00:37.122178 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:37 crc kubenswrapper[4996]: I0228 10:00:37.177798 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:37 crc kubenswrapper[4996]: I0228 10:00:37.834607 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdqpp"] Feb 28 10:00:38 crc kubenswrapper[4996]: I0228 10:00:38.167573 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wdqpp" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="registry-server" 
containerID="cri-o://9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923" gracePeriod=2 Feb 28 10:00:38 crc kubenswrapper[4996]: I0228 10:00:38.839789 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:38 crc kubenswrapper[4996]: I0228 10:00:38.956649 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26ths\" (UniqueName: \"kubernetes.io/projected/d3caceba-93cf-4476-ae26-c93373a8cdb3-kube-api-access-26ths\") pod \"d3caceba-93cf-4476-ae26-c93373a8cdb3\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " Feb 28 10:00:38 crc kubenswrapper[4996]: I0228 10:00:38.956826 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-catalog-content\") pod \"d3caceba-93cf-4476-ae26-c93373a8cdb3\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " Feb 28 10:00:38 crc kubenswrapper[4996]: I0228 10:00:38.956913 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-utilities\") pod \"d3caceba-93cf-4476-ae26-c93373a8cdb3\" (UID: \"d3caceba-93cf-4476-ae26-c93373a8cdb3\") " Feb 28 10:00:38 crc kubenswrapper[4996]: I0228 10:00:38.957517 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-utilities" (OuterVolumeSpecName: "utilities") pod "d3caceba-93cf-4476-ae26-c93373a8cdb3" (UID: "d3caceba-93cf-4476-ae26-c93373a8cdb3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:00:38 crc kubenswrapper[4996]: I0228 10:00:38.961817 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3caceba-93cf-4476-ae26-c93373a8cdb3-kube-api-access-26ths" (OuterVolumeSpecName: "kube-api-access-26ths") pod "d3caceba-93cf-4476-ae26-c93373a8cdb3" (UID: "d3caceba-93cf-4476-ae26-c93373a8cdb3"). InnerVolumeSpecName "kube-api-access-26ths". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.060120 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26ths\" (UniqueName: \"kubernetes.io/projected/d3caceba-93cf-4476-ae26-c93373a8cdb3-kube-api-access-26ths\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.060405 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.113911 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3caceba-93cf-4476-ae26-c93373a8cdb3" (UID: "d3caceba-93cf-4476-ae26-c93373a8cdb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.163743 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3caceba-93cf-4476-ae26-c93373a8cdb3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.177628 4996 generic.go:334] "Generic (PLEG): container finished" podID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerID="9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923" exitCode=0 Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.177669 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqpp" event={"ID":"d3caceba-93cf-4476-ae26-c93373a8cdb3","Type":"ContainerDied","Data":"9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923"} Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.177681 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdqpp" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.177698 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdqpp" event={"ID":"d3caceba-93cf-4476-ae26-c93373a8cdb3","Type":"ContainerDied","Data":"239339110fa0ae7ccbc845257a50584ed54d437c39860592fef6d7abb78b308e"} Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.177717 4996 scope.go:117] "RemoveContainer" containerID="9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.214437 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdqpp"] Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.217322 4996 scope.go:117] "RemoveContainer" containerID="7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.236292 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wdqpp"] Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.243349 4996 scope.go:117] "RemoveContainer" containerID="8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.305486 4996 scope.go:117] "RemoveContainer" containerID="9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923" Feb 28 10:00:39 crc kubenswrapper[4996]: E0228 10:00:39.306342 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923\": container with ID starting with 9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923 not found: ID does not exist" containerID="9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.306384 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923"} err="failed to get container status \"9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923\": rpc error: code = NotFound desc = could not find container \"9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923\": container with ID starting with 9d0182334346ca8f2e62d2f76a750961841d0b089906f23b1181bbef8caeb923 not found: ID does not exist" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.306410 4996 scope.go:117] "RemoveContainer" containerID="7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316" Feb 28 10:00:39 crc kubenswrapper[4996]: E0228 10:00:39.307308 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316\": container with ID starting with 7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316 not found: ID does not exist" containerID="7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.307361 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316"} err="failed to get container status \"7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316\": rpc error: code = NotFound desc = could not find container \"7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316\": container with ID starting with 7d847d8f1c6d086666fbf30d28430ba834b88f3e6075ad65e3b7e02727e39316 not found: ID does not exist" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.307389 4996 scope.go:117] "RemoveContainer" containerID="8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920" Feb 28 10:00:39 crc kubenswrapper[4996]: E0228 
10:00:39.308536 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920\": container with ID starting with 8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920 not found: ID does not exist" containerID="8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.308560 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920"} err="failed to get container status \"8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920\": rpc error: code = NotFound desc = could not find container \"8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920\": container with ID starting with 8ce2e2ac7efcabddec266f83a31ef2abbe89d5ded001aa698cd06af3c693b920 not found: ID does not exist" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.786239 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.786330 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:39 crc kubenswrapper[4996]: I0228 10:00:39.858496 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:40 crc kubenswrapper[4996]: I0228 10:00:40.230669 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:41 crc kubenswrapper[4996]: I0228 10:00:41.045801 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" 
path="/var/lib/kubelet/pods/d3caceba-93cf-4476-ae26-c93373a8cdb3/volumes" Feb 28 10:00:42 crc kubenswrapper[4996]: I0228 10:00:42.230093 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4g7k8"] Feb 28 10:00:42 crc kubenswrapper[4996]: I0228 10:00:42.230316 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4g7k8" podUID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerName="registry-server" containerID="cri-o://51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874" gracePeriod=2 Feb 28 10:00:42 crc kubenswrapper[4996]: I0228 10:00:42.864444 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.034569 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-utilities\") pod \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.035188 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9lhb\" (UniqueName: \"kubernetes.io/projected/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-kube-api-access-n9lhb\") pod \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.035294 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-catalog-content\") pod \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\" (UID: \"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9\") " Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.051276 4996 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-utilities" (OuterVolumeSpecName: "utilities") pod "4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" (UID: "4734aef0-db2c-4a43-a0d0-697ce6f6f2e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.082624 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-kube-api-access-n9lhb" (OuterVolumeSpecName: "kube-api-access-n9lhb") pod "4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" (UID: "4734aef0-db2c-4a43-a0d0-697ce6f6f2e9"). InnerVolumeSpecName "kube-api-access-n9lhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.128595 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" (UID: "4734aef0-db2c-4a43-a0d0-697ce6f6f2e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.152801 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9lhb\" (UniqueName: \"kubernetes.io/projected/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-kube-api-access-n9lhb\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.152836 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.152845 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.218615 4996 generic.go:334] "Generic (PLEG): container finished" podID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerID="51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874" exitCode=0 Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.218664 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g7k8" event={"ID":"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9","Type":"ContainerDied","Data":"51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874"} Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.218689 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g7k8" event={"ID":"4734aef0-db2c-4a43-a0d0-697ce6f6f2e9","Type":"ContainerDied","Data":"baaa6ffef417b44ac67a9fcf7711b0d872f55572a9d4da5ab70a1ad79b103147"} Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.218688 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4g7k8" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.218705 4996 scope.go:117] "RemoveContainer" containerID="51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.241539 4996 scope.go:117] "RemoveContainer" containerID="e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.263848 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4g7k8"] Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.293571 4996 scope.go:117] "RemoveContainer" containerID="2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.297905 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4g7k8"] Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.325507 4996 scope.go:117] "RemoveContainer" containerID="51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874" Feb 28 10:00:43 crc kubenswrapper[4996]: E0228 10:00:43.325952 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874\": container with ID starting with 51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874 not found: ID does not exist" containerID="51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.325996 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874"} err="failed to get container status \"51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874\": rpc error: code = NotFound desc = could not find 
container \"51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874\": container with ID starting with 51c0913f88da7955b5c581d3dea87c8a4bed7d9aaec5904a272aa410c048a874 not found: ID does not exist" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.326042 4996 scope.go:117] "RemoveContainer" containerID="e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0" Feb 28 10:00:43 crc kubenswrapper[4996]: E0228 10:00:43.326363 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0\": container with ID starting with e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0 not found: ID does not exist" containerID="e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.326397 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0"} err="failed to get container status \"e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0\": rpc error: code = NotFound desc = could not find container \"e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0\": container with ID starting with e2936cf4ada1afe23e54b6197a205fa1fcbaf529a595f61199f8d959b3bda2f0 not found: ID does not exist" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.326417 4996 scope.go:117] "RemoveContainer" containerID="2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7" Feb 28 10:00:43 crc kubenswrapper[4996]: E0228 10:00:43.326733 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7\": container with ID starting with 2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7 not found: ID does 
not exist" containerID="2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7" Feb 28 10:00:43 crc kubenswrapper[4996]: I0228 10:00:43.326778 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7"} err="failed to get container status \"2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7\": rpc error: code = NotFound desc = could not find container \"2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7\": container with ID starting with 2a111aacf08bec18798a7ee6826120bf20c35395f7ad5aa32a7316006354f9d7 not found: ID does not exist" Feb 28 10:00:45 crc kubenswrapper[4996]: I0228 10:00:45.043781 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" path="/var/lib/kubelet/pods/4734aef0-db2c-4a43-a0d0-697ce6f6f2e9/volumes" Feb 28 10:00:46 crc kubenswrapper[4996]: I0228 10:00:46.562243 4996 scope.go:117] "RemoveContainer" containerID="7c5070893bec19de14dcc6cc41c7443e80176921805c56c1006155be074bff60" Feb 28 10:00:46 crc kubenswrapper[4996]: I0228 10:00:46.595662 4996 scope.go:117] "RemoveContainer" containerID="47fffa18b047164770b7b2cd070c6070dfe48129fc140c273cfe64fc3c3cf968" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.162027 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29537881-447jn"] Feb 28 10:01:00 crc kubenswrapper[4996]: E0228 10:01:00.165518 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="registry-server" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.165556 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="registry-server" Feb 28 10:01:00 crc kubenswrapper[4996]: E0228 10:01:00.165579 4996 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerName="registry-server" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.165593 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerName="registry-server" Feb 28 10:01:00 crc kubenswrapper[4996]: E0228 10:01:00.165623 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerName="extract-content" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.165634 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerName="extract-content" Feb 28 10:01:00 crc kubenswrapper[4996]: E0228 10:01:00.165650 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="extract-utilities" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.165662 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="extract-utilities" Feb 28 10:01:00 crc kubenswrapper[4996]: E0228 10:01:00.165678 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="extract-content" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.165688 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="extract-content" Feb 28 10:01:00 crc kubenswrapper[4996]: E0228 10:01:00.165723 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerName="extract-utilities" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.165732 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerName="extract-utilities" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.165992 4996 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4734aef0-db2c-4a43-a0d0-697ce6f6f2e9" containerName="registry-server" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.166049 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3caceba-93cf-4476-ae26-c93373a8cdb3" containerName="registry-server" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.166845 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.177528 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29537881-447jn"] Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.211489 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn7mt\" (UniqueName: \"kubernetes.io/projected/d637ef52-36d0-4c60-8bef-201d71cac614-kube-api-access-cn7mt\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.211908 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-config-data\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.211943 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-combined-ca-bundle\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.211969 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-fernet-keys\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.313831 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-config-data\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.313872 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-combined-ca-bundle\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.313898 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-fernet-keys\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.314092 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn7mt\" (UniqueName: \"kubernetes.io/projected/d637ef52-36d0-4c60-8bef-201d71cac614-kube-api-access-cn7mt\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.320650 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-combined-ca-bundle\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.323359 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-fernet-keys\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.324895 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-config-data\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.342678 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn7mt\" (UniqueName: \"kubernetes.io/projected/d637ef52-36d0-4c60-8bef-201d71cac614-kube-api-access-cn7mt\") pod \"keystone-cron-29537881-447jn\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.490509 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:00 crc kubenswrapper[4996]: I0228 10:01:00.976693 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29537881-447jn"] Feb 28 10:01:01 crc kubenswrapper[4996]: I0228 10:01:01.384103 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537881-447jn" event={"ID":"d637ef52-36d0-4c60-8bef-201d71cac614","Type":"ContainerStarted","Data":"11666c7d2260a243500201850a2dbabfcbae09f1f96bcb32ca1f375a1fbd609b"} Feb 28 10:01:01 crc kubenswrapper[4996]: I0228 10:01:01.384483 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537881-447jn" event={"ID":"d637ef52-36d0-4c60-8bef-201d71cac614","Type":"ContainerStarted","Data":"e05039499a30a5863e3ccc39272b43c6aeedbdde83d73ac3bc0771f456bfd057"} Feb 28 10:01:01 crc kubenswrapper[4996]: I0228 10:01:01.402703 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29537881-447jn" podStartSLOduration=1.402685477 podStartE2EDuration="1.402685477s" podCreationTimestamp="2026-02-28 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 10:01:01.398239559 +0000 UTC m=+3625.089042370" watchObservedRunningTime="2026-02-28 10:01:01.402685477 +0000 UTC m=+3625.093488288" Feb 28 10:01:04 crc kubenswrapper[4996]: I0228 10:01:04.411823 4996 generic.go:334] "Generic (PLEG): container finished" podID="d637ef52-36d0-4c60-8bef-201d71cac614" containerID="11666c7d2260a243500201850a2dbabfcbae09f1f96bcb32ca1f375a1fbd609b" exitCode=0 Feb 28 10:01:04 crc kubenswrapper[4996]: I0228 10:01:04.411897 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537881-447jn" 
event={"ID":"d637ef52-36d0-4c60-8bef-201d71cac614","Type":"ContainerDied","Data":"11666c7d2260a243500201850a2dbabfcbae09f1f96bcb32ca1f375a1fbd609b"} Feb 28 10:01:05 crc kubenswrapper[4996]: I0228 10:01:05.951496 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.041728 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-fernet-keys\") pod \"d637ef52-36d0-4c60-8bef-201d71cac614\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.041847 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn7mt\" (UniqueName: \"kubernetes.io/projected/d637ef52-36d0-4c60-8bef-201d71cac614-kube-api-access-cn7mt\") pod \"d637ef52-36d0-4c60-8bef-201d71cac614\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.041939 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-combined-ca-bundle\") pod \"d637ef52-36d0-4c60-8bef-201d71cac614\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.042024 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-config-data\") pod \"d637ef52-36d0-4c60-8bef-201d71cac614\" (UID: \"d637ef52-36d0-4c60-8bef-201d71cac614\") " Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.047700 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d637ef52-36d0-4c60-8bef-201d71cac614-kube-api-access-cn7mt" 
(OuterVolumeSpecName: "kube-api-access-cn7mt") pod "d637ef52-36d0-4c60-8bef-201d71cac614" (UID: "d637ef52-36d0-4c60-8bef-201d71cac614"). InnerVolumeSpecName "kube-api-access-cn7mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.049189 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d637ef52-36d0-4c60-8bef-201d71cac614" (UID: "d637ef52-36d0-4c60-8bef-201d71cac614"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.096235 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d637ef52-36d0-4c60-8bef-201d71cac614" (UID: "d637ef52-36d0-4c60-8bef-201d71cac614"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.119785 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-config-data" (OuterVolumeSpecName: "config-data") pod "d637ef52-36d0-4c60-8bef-201d71cac614" (UID: "d637ef52-36d0-4c60-8bef-201d71cac614"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.148355 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn7mt\" (UniqueName: \"kubernetes.io/projected/d637ef52-36d0-4c60-8bef-201d71cac614-kube-api-access-cn7mt\") on node \"crc\" DevicePath \"\"" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.148389 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.148400 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.148409 4996 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d637ef52-36d0-4c60-8bef-201d71cac614-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.436923 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537881-447jn" event={"ID":"d637ef52-36d0-4c60-8bef-201d71cac614","Type":"ContainerDied","Data":"e05039499a30a5863e3ccc39272b43c6aeedbdde83d73ac3bc0771f456bfd057"} Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.437279 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e05039499a30a5863e3ccc39272b43c6aeedbdde83d73ac3bc0771f456bfd057" Feb 28 10:01:06 crc kubenswrapper[4996]: I0228 10:01:06.436996 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29537881-447jn" Feb 28 10:01:12 crc kubenswrapper[4996]: I0228 10:01:12.248589 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:01:12 crc kubenswrapper[4996]: I0228 10:01:12.249093 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:01:18 crc kubenswrapper[4996]: I0228 10:01:18.046150 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-3817-account-create-update-24mt4"] Feb 28 10:01:18 crc kubenswrapper[4996]: I0228 10:01:18.053881 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-gjcc8"] Feb 28 10:01:18 crc kubenswrapper[4996]: I0228 10:01:18.062704 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-gjcc8"] Feb 28 10:01:18 crc kubenswrapper[4996]: I0228 10:01:18.074663 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-3817-account-create-update-24mt4"] Feb 28 10:01:19 crc kubenswrapper[4996]: I0228 10:01:19.045906 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05a033eb-bb26-4a33-88a3-3c7e2099329b" path="/var/lib/kubelet/pods/05a033eb-bb26-4a33-88a3-3c7e2099329b/volumes" Feb 28 10:01:19 crc kubenswrapper[4996]: I0228 10:01:19.046931 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5fc5e08-1d74-4a25-b7f5-824b82c70591" path="/var/lib/kubelet/pods/a5fc5e08-1d74-4a25-b7f5-824b82c70591/volumes" Feb 28 10:01:37 
crc kubenswrapper[4996]: I0228 10:01:37.061351 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-cd4qw"] Feb 28 10:01:37 crc kubenswrapper[4996]: I0228 10:01:37.063474 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-cd4qw"] Feb 28 10:01:39 crc kubenswrapper[4996]: I0228 10:01:39.045346 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9647fc5-2585-46e2-aa04-045cdeb86e5c" path="/var/lib/kubelet/pods/f9647fc5-2585-46e2-aa04-045cdeb86e5c/volumes" Feb 28 10:01:42 crc kubenswrapper[4996]: I0228 10:01:42.249066 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:01:42 crc kubenswrapper[4996]: I0228 10:01:42.249425 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:01:46 crc kubenswrapper[4996]: I0228 10:01:46.766463 4996 scope.go:117] "RemoveContainer" containerID="fa4ade38c652a1c82f075efa6dc80097763091072fe97137327c3926e21a08d4" Feb 28 10:01:46 crc kubenswrapper[4996]: I0228 10:01:46.807626 4996 scope.go:117] "RemoveContainer" containerID="ba64155110b18c9cfb0773d00f2bdd9b8c041cae7f68bc6bcd57a86737e4ad9f" Feb 28 10:01:46 crc kubenswrapper[4996]: I0228 10:01:46.856521 4996 scope.go:117] "RemoveContainer" containerID="e00d1a365d9cacd2c0e22d7b97375f207200e1df16e1fe4be87252115a83ead0" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.140251 4996 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29537882-bc9k8"] Feb 28 10:02:00 crc kubenswrapper[4996]: E0228 10:02:00.141085 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d637ef52-36d0-4c60-8bef-201d71cac614" containerName="keystone-cron" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.141100 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d637ef52-36d0-4c60-8bef-201d71cac614" containerName="keystone-cron" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.141337 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d637ef52-36d0-4c60-8bef-201d71cac614" containerName="keystone-cron" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.142059 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537882-bc9k8" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.147037 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.147345 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.147409 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.166107 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537882-bc9k8"] Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.264434 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwvx2\" (UniqueName: \"kubernetes.io/projected/235311a5-1051-4a97-9523-ee0de1a5f461-kube-api-access-mwvx2\") pod \"auto-csr-approver-29537882-bc9k8\" (UID: \"235311a5-1051-4a97-9523-ee0de1a5f461\") " pod="openshift-infra/auto-csr-approver-29537882-bc9k8" Feb 28 
10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.365809 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwvx2\" (UniqueName: \"kubernetes.io/projected/235311a5-1051-4a97-9523-ee0de1a5f461-kube-api-access-mwvx2\") pod \"auto-csr-approver-29537882-bc9k8\" (UID: \"235311a5-1051-4a97-9523-ee0de1a5f461\") " pod="openshift-infra/auto-csr-approver-29537882-bc9k8" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.388405 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwvx2\" (UniqueName: \"kubernetes.io/projected/235311a5-1051-4a97-9523-ee0de1a5f461-kube-api-access-mwvx2\") pod \"auto-csr-approver-29537882-bc9k8\" (UID: \"235311a5-1051-4a97-9523-ee0de1a5f461\") " pod="openshift-infra/auto-csr-approver-29537882-bc9k8" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.462459 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537882-bc9k8" Feb 28 10:02:00 crc kubenswrapper[4996]: I0228 10:02:00.937477 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537882-bc9k8"] Feb 28 10:02:01 crc kubenswrapper[4996]: I0228 10:02:01.942908 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537882-bc9k8" event={"ID":"235311a5-1051-4a97-9523-ee0de1a5f461","Type":"ContainerStarted","Data":"de4b7b5240a2565d089cd6056bd64510bed626ba6e669cf5b5335ceb94bbb1bc"} Feb 28 10:02:02 crc kubenswrapper[4996]: I0228 10:02:02.951469 4996 generic.go:334] "Generic (PLEG): container finished" podID="235311a5-1051-4a97-9523-ee0de1a5f461" containerID="e2cfc4b485654c66327f216132e6bab7d4e3fc8f554d53679ae77e17c908acb0" exitCode=0 Feb 28 10:02:02 crc kubenswrapper[4996]: I0228 10:02:02.951571 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537882-bc9k8" 
event={"ID":"235311a5-1051-4a97-9523-ee0de1a5f461","Type":"ContainerDied","Data":"e2cfc4b485654c66327f216132e6bab7d4e3fc8f554d53679ae77e17c908acb0"} Feb 28 10:02:04 crc kubenswrapper[4996]: I0228 10:02:04.484083 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537882-bc9k8" Feb 28 10:02:04 crc kubenswrapper[4996]: I0228 10:02:04.552089 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwvx2\" (UniqueName: \"kubernetes.io/projected/235311a5-1051-4a97-9523-ee0de1a5f461-kube-api-access-mwvx2\") pod \"235311a5-1051-4a97-9523-ee0de1a5f461\" (UID: \"235311a5-1051-4a97-9523-ee0de1a5f461\") " Feb 28 10:02:04 crc kubenswrapper[4996]: I0228 10:02:04.568250 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/235311a5-1051-4a97-9523-ee0de1a5f461-kube-api-access-mwvx2" (OuterVolumeSpecName: "kube-api-access-mwvx2") pod "235311a5-1051-4a97-9523-ee0de1a5f461" (UID: "235311a5-1051-4a97-9523-ee0de1a5f461"). InnerVolumeSpecName "kube-api-access-mwvx2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:02:04 crc kubenswrapper[4996]: I0228 10:02:04.656573 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwvx2\" (UniqueName: \"kubernetes.io/projected/235311a5-1051-4a97-9523-ee0de1a5f461-kube-api-access-mwvx2\") on node \"crc\" DevicePath \"\"" Feb 28 10:02:04 crc kubenswrapper[4996]: I0228 10:02:04.970257 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537882-bc9k8" event={"ID":"235311a5-1051-4a97-9523-ee0de1a5f461","Type":"ContainerDied","Data":"de4b7b5240a2565d089cd6056bd64510bed626ba6e669cf5b5335ceb94bbb1bc"} Feb 28 10:02:04 crc kubenswrapper[4996]: I0228 10:02:04.970485 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de4b7b5240a2565d089cd6056bd64510bed626ba6e669cf5b5335ceb94bbb1bc" Feb 28 10:02:04 crc kubenswrapper[4996]: I0228 10:02:04.970336 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537882-bc9k8" Feb 28 10:02:05 crc kubenswrapper[4996]: I0228 10:02:05.557093 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537876-xvjg9"] Feb 28 10:02:05 crc kubenswrapper[4996]: I0228 10:02:05.563989 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537876-xvjg9"] Feb 28 10:02:07 crc kubenswrapper[4996]: I0228 10:02:07.043826 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c2ecae-bac9-43ef-8f00-826a2cb721f1" path="/var/lib/kubelet/pods/23c2ecae-bac9-43ef-8f00-826a2cb721f1/volumes" Feb 28 10:02:12 crc kubenswrapper[4996]: I0228 10:02:12.249336 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 10:02:12 crc kubenswrapper[4996]: I0228 10:02:12.249877 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:02:12 crc kubenswrapper[4996]: I0228 10:02:12.249926 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:02:12 crc kubenswrapper[4996]: I0228 10:02:12.250715 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:02:12 crc kubenswrapper[4996]: I0228 10:02:12.250774 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" gracePeriod=600 Feb 28 10:02:12 crc kubenswrapper[4996]: E0228 10:02:12.382530 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:02:13 crc kubenswrapper[4996]: 
I0228 10:02:13.043817 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" exitCode=0 Feb 28 10:02:13 crc kubenswrapper[4996]: I0228 10:02:13.043938 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a"} Feb 28 10:02:13 crc kubenswrapper[4996]: I0228 10:02:13.044131 4996 scope.go:117] "RemoveContainer" containerID="6eb116081369391456f05d941d1318d5b3658ce08cbf4ef14c5126d1f232921f" Feb 28 10:02:13 crc kubenswrapper[4996]: I0228 10:02:13.046575 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:02:13 crc kubenswrapper[4996]: E0228 10:02:13.047156 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:02:28 crc kubenswrapper[4996]: I0228 10:02:28.033246 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:02:28 crc kubenswrapper[4996]: E0228 10:02:28.034153 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:02:40 crc kubenswrapper[4996]: I0228 10:02:40.033294 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:02:40 crc kubenswrapper[4996]: E0228 10:02:40.034526 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:02:46 crc kubenswrapper[4996]: I0228 10:02:46.956372 4996 scope.go:117] "RemoveContainer" containerID="36313123119f69d49600b5140a48b083eae9ff9efae9cfbfab6a65f6ebd554b2" Feb 28 10:02:53 crc kubenswrapper[4996]: I0228 10:02:53.032949 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:02:53 crc kubenswrapper[4996]: E0228 10:02:53.033671 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:03:08 crc kubenswrapper[4996]: I0228 10:03:08.033851 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:03:08 crc kubenswrapper[4996]: E0228 10:03:08.034670 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:03:20 crc kubenswrapper[4996]: I0228 10:03:20.033883 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:03:20 crc kubenswrapper[4996]: E0228 10:03:20.034969 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:03:32 crc kubenswrapper[4996]: I0228 10:03:32.032710 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:03:32 crc kubenswrapper[4996]: E0228 10:03:32.034267 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:03:46 crc kubenswrapper[4996]: I0228 10:03:46.033538 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:03:46 crc kubenswrapper[4996]: E0228 10:03:46.034474 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.032658 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:04:00 crc kubenswrapper[4996]: E0228 10:04:00.033605 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.148691 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537884-cfpmr"] Feb 28 10:04:00 crc kubenswrapper[4996]: E0228 10:04:00.149139 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235311a5-1051-4a97-9523-ee0de1a5f461" containerName="oc" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.149155 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="235311a5-1051-4a97-9523-ee0de1a5f461" containerName="oc" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.149322 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="235311a5-1051-4a97-9523-ee0de1a5f461" containerName="oc" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.150595 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537884-cfpmr" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.152974 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.158644 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.158731 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.166759 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537884-cfpmr"] Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.228510 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4bsd\" (UniqueName: \"kubernetes.io/projected/ae0c33c9-4a25-4abd-b05b-4315c7292773-kube-api-access-n4bsd\") pod \"auto-csr-approver-29537884-cfpmr\" (UID: \"ae0c33c9-4a25-4abd-b05b-4315c7292773\") " pod="openshift-infra/auto-csr-approver-29537884-cfpmr" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.330480 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4bsd\" (UniqueName: \"kubernetes.io/projected/ae0c33c9-4a25-4abd-b05b-4315c7292773-kube-api-access-n4bsd\") pod \"auto-csr-approver-29537884-cfpmr\" (UID: \"ae0c33c9-4a25-4abd-b05b-4315c7292773\") " pod="openshift-infra/auto-csr-approver-29537884-cfpmr" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.357041 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4bsd\" (UniqueName: \"kubernetes.io/projected/ae0c33c9-4a25-4abd-b05b-4315c7292773-kube-api-access-n4bsd\") pod \"auto-csr-approver-29537884-cfpmr\" (UID: \"ae0c33c9-4a25-4abd-b05b-4315c7292773\") " 
pod="openshift-infra/auto-csr-approver-29537884-cfpmr" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.474500 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537884-cfpmr" Feb 28 10:04:00 crc kubenswrapper[4996]: I0228 10:04:00.936112 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537884-cfpmr"] Feb 28 10:04:00 crc kubenswrapper[4996]: W0228 10:04:00.938368 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae0c33c9_4a25_4abd_b05b_4315c7292773.slice/crio-54d44908cf990958736dbc143eb0f9e1a0affe0bb11113ad0b5604eb17690e4a WatchSource:0}: Error finding container 54d44908cf990958736dbc143eb0f9e1a0affe0bb11113ad0b5604eb17690e4a: Status 404 returned error can't find the container with id 54d44908cf990958736dbc143eb0f9e1a0affe0bb11113ad0b5604eb17690e4a Feb 28 10:04:01 crc kubenswrapper[4996]: I0228 10:04:01.005750 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537884-cfpmr" event={"ID":"ae0c33c9-4a25-4abd-b05b-4315c7292773","Type":"ContainerStarted","Data":"54d44908cf990958736dbc143eb0f9e1a0affe0bb11113ad0b5604eb17690e4a"} Feb 28 10:04:03 crc kubenswrapper[4996]: I0228 10:04:03.049120 4996 generic.go:334] "Generic (PLEG): container finished" podID="ae0c33c9-4a25-4abd-b05b-4315c7292773" containerID="8ccd1577f3f869a783b680332035be246c55184bfadb335e35684de5ccb16378" exitCode=0 Feb 28 10:04:03 crc kubenswrapper[4996]: I0228 10:04:03.064245 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537884-cfpmr" event={"ID":"ae0c33c9-4a25-4abd-b05b-4315c7292773","Type":"ContainerDied","Data":"8ccd1577f3f869a783b680332035be246c55184bfadb335e35684de5ccb16378"} Feb 28 10:04:04 crc kubenswrapper[4996]: I0228 10:04:04.592434 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537884-cfpmr" Feb 28 10:04:04 crc kubenswrapper[4996]: I0228 10:04:04.715974 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4bsd\" (UniqueName: \"kubernetes.io/projected/ae0c33c9-4a25-4abd-b05b-4315c7292773-kube-api-access-n4bsd\") pod \"ae0c33c9-4a25-4abd-b05b-4315c7292773\" (UID: \"ae0c33c9-4a25-4abd-b05b-4315c7292773\") " Feb 28 10:04:04 crc kubenswrapper[4996]: I0228 10:04:04.721875 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0c33c9-4a25-4abd-b05b-4315c7292773-kube-api-access-n4bsd" (OuterVolumeSpecName: "kube-api-access-n4bsd") pod "ae0c33c9-4a25-4abd-b05b-4315c7292773" (UID: "ae0c33c9-4a25-4abd-b05b-4315c7292773"). InnerVolumeSpecName "kube-api-access-n4bsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:04:04 crc kubenswrapper[4996]: I0228 10:04:04.818433 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4bsd\" (UniqueName: \"kubernetes.io/projected/ae0c33c9-4a25-4abd-b05b-4315c7292773-kube-api-access-n4bsd\") on node \"crc\" DevicePath \"\"" Feb 28 10:04:05 crc kubenswrapper[4996]: I0228 10:04:05.065986 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537884-cfpmr" event={"ID":"ae0c33c9-4a25-4abd-b05b-4315c7292773","Type":"ContainerDied","Data":"54d44908cf990958736dbc143eb0f9e1a0affe0bb11113ad0b5604eb17690e4a"} Feb 28 10:04:05 crc kubenswrapper[4996]: I0228 10:04:05.066037 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54d44908cf990958736dbc143eb0f9e1a0affe0bb11113ad0b5604eb17690e4a" Feb 28 10:04:05 crc kubenswrapper[4996]: I0228 10:04:05.066120 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537884-cfpmr" Feb 28 10:04:05 crc kubenswrapper[4996]: I0228 10:04:05.686978 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537878-4qnfr"] Feb 28 10:04:05 crc kubenswrapper[4996]: I0228 10:04:05.702634 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537878-4qnfr"] Feb 28 10:04:07 crc kubenswrapper[4996]: I0228 10:04:07.050789 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482cb3db-191b-4fee-b655-e74426932bdf" path="/var/lib/kubelet/pods/482cb3db-191b-4fee-b655-e74426932bdf/volumes" Feb 28 10:04:11 crc kubenswrapper[4996]: I0228 10:04:11.034089 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:04:11 crc kubenswrapper[4996]: E0228 10:04:11.034846 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:04:23 crc kubenswrapper[4996]: I0228 10:04:23.033168 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:04:23 crc kubenswrapper[4996]: E0228 10:04:23.034201 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:04:38 crc kubenswrapper[4996]: I0228 10:04:38.033548 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:04:38 crc kubenswrapper[4996]: E0228 10:04:38.034789 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:04:47 crc kubenswrapper[4996]: I0228 10:04:47.090896 4996 scope.go:117] "RemoveContainer" containerID="348d1606179f0d44ad3599dfbbf0bd1479bd3895c578492a33e0665dfc9d6e4d" Feb 28 10:04:50 crc kubenswrapper[4996]: I0228 10:04:50.034051 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:04:50 crc kubenswrapper[4996]: E0228 10:04:50.036517 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:05:01 crc kubenswrapper[4996]: I0228 10:05:01.033575 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:05:01 crc kubenswrapper[4996]: E0228 10:05:01.034478 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:05:14 crc kubenswrapper[4996]: I0228 10:05:14.033222 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:05:14 crc kubenswrapper[4996]: E0228 10:05:14.034277 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:05:28 crc kubenswrapper[4996]: I0228 10:05:28.032653 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:05:28 crc kubenswrapper[4996]: E0228 10:05:28.034553 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:05:40 crc kubenswrapper[4996]: I0228 10:05:40.033764 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:05:40 crc kubenswrapper[4996]: E0228 10:05:40.034773 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:05:52 crc kubenswrapper[4996]: I0228 10:05:52.033550 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:05:52 crc kubenswrapper[4996]: E0228 10:05:52.034482 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.143567 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537886-7bc7m"] Feb 28 10:06:00 crc kubenswrapper[4996]: E0228 10:06:00.146457 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0c33c9-4a25-4abd-b05b-4315c7292773" containerName="oc" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.146645 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0c33c9-4a25-4abd-b05b-4315c7292773" containerName="oc" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.147351 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0c33c9-4a25-4abd-b05b-4315c7292773" containerName="oc" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.148543 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537886-7bc7m" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.152944 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537886-7bc7m"] Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.153249 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.153503 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.153594 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.222897 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2n5g\" (UniqueName: \"kubernetes.io/projected/b9c725b1-c96f-40f0-9642-a4e5f2e2588a-kube-api-access-t2n5g\") pod \"auto-csr-approver-29537886-7bc7m\" (UID: \"b9c725b1-c96f-40f0-9642-a4e5f2e2588a\") " pod="openshift-infra/auto-csr-approver-29537886-7bc7m" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.324485 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2n5g\" (UniqueName: \"kubernetes.io/projected/b9c725b1-c96f-40f0-9642-a4e5f2e2588a-kube-api-access-t2n5g\") pod \"auto-csr-approver-29537886-7bc7m\" (UID: \"b9c725b1-c96f-40f0-9642-a4e5f2e2588a\") " pod="openshift-infra/auto-csr-approver-29537886-7bc7m" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.342637 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2n5g\" (UniqueName: \"kubernetes.io/projected/b9c725b1-c96f-40f0-9642-a4e5f2e2588a-kube-api-access-t2n5g\") pod \"auto-csr-approver-29537886-7bc7m\" (UID: \"b9c725b1-c96f-40f0-9642-a4e5f2e2588a\") " 
pod="openshift-infra/auto-csr-approver-29537886-7bc7m" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.478515 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537886-7bc7m" Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.976854 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537886-7bc7m"] Feb 28 10:06:00 crc kubenswrapper[4996]: W0228 10:06:00.983502 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9c725b1_c96f_40f0_9642_a4e5f2e2588a.slice/crio-64d9acc8e8a1fd2445cf7d48a8c5de91a145aeeddef3b92d6f5425dd8203e5f0 WatchSource:0}: Error finding container 64d9acc8e8a1fd2445cf7d48a8c5de91a145aeeddef3b92d6f5425dd8203e5f0: Status 404 returned error can't find the container with id 64d9acc8e8a1fd2445cf7d48a8c5de91a145aeeddef3b92d6f5425dd8203e5f0 Feb 28 10:06:00 crc kubenswrapper[4996]: I0228 10:06:00.986766 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:06:01 crc kubenswrapper[4996]: I0228 10:06:01.137313 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537886-7bc7m" event={"ID":"b9c725b1-c96f-40f0-9642-a4e5f2e2588a","Type":"ContainerStarted","Data":"64d9acc8e8a1fd2445cf7d48a8c5de91a145aeeddef3b92d6f5425dd8203e5f0"} Feb 28 10:06:03 crc kubenswrapper[4996]: I0228 10:06:03.161788 4996 generic.go:334] "Generic (PLEG): container finished" podID="b9c725b1-c96f-40f0-9642-a4e5f2e2588a" containerID="3bab77a86579597387ea66c0fb79bb7ddba8102656b4187a9f0cebb4f528d744" exitCode=0 Feb 28 10:06:03 crc kubenswrapper[4996]: I0228 10:06:03.161905 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537886-7bc7m" 
event={"ID":"b9c725b1-c96f-40f0-9642-a4e5f2e2588a","Type":"ContainerDied","Data":"3bab77a86579597387ea66c0fb79bb7ddba8102656b4187a9f0cebb4f528d744"} Feb 28 10:06:04 crc kubenswrapper[4996]: I0228 10:06:04.885539 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537886-7bc7m" Feb 28 10:06:05 crc kubenswrapper[4996]: I0228 10:06:05.031952 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2n5g\" (UniqueName: \"kubernetes.io/projected/b9c725b1-c96f-40f0-9642-a4e5f2e2588a-kube-api-access-t2n5g\") pod \"b9c725b1-c96f-40f0-9642-a4e5f2e2588a\" (UID: \"b9c725b1-c96f-40f0-9642-a4e5f2e2588a\") " Feb 28 10:06:05 crc kubenswrapper[4996]: I0228 10:06:05.037427 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9c725b1-c96f-40f0-9642-a4e5f2e2588a-kube-api-access-t2n5g" (OuterVolumeSpecName: "kube-api-access-t2n5g") pod "b9c725b1-c96f-40f0-9642-a4e5f2e2588a" (UID: "b9c725b1-c96f-40f0-9642-a4e5f2e2588a"). InnerVolumeSpecName "kube-api-access-t2n5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:06:05 crc kubenswrapper[4996]: I0228 10:06:05.134639 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2n5g\" (UniqueName: \"kubernetes.io/projected/b9c725b1-c96f-40f0-9642-a4e5f2e2588a-kube-api-access-t2n5g\") on node \"crc\" DevicePath \"\"" Feb 28 10:06:05 crc kubenswrapper[4996]: I0228 10:06:05.192384 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537886-7bc7m" event={"ID":"b9c725b1-c96f-40f0-9642-a4e5f2e2588a","Type":"ContainerDied","Data":"64d9acc8e8a1fd2445cf7d48a8c5de91a145aeeddef3b92d6f5425dd8203e5f0"} Feb 28 10:06:05 crc kubenswrapper[4996]: I0228 10:06:05.192431 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64d9acc8e8a1fd2445cf7d48a8c5de91a145aeeddef3b92d6f5425dd8203e5f0" Feb 28 10:06:05 crc kubenswrapper[4996]: I0228 10:06:05.192436 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537886-7bc7m" Feb 28 10:06:05 crc kubenswrapper[4996]: I0228 10:06:05.961554 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537880-8twc2"] Feb 28 10:06:05 crc kubenswrapper[4996]: I0228 10:06:05.973587 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537880-8twc2"] Feb 28 10:06:06 crc kubenswrapper[4996]: I0228 10:06:06.033125 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:06:06 crc kubenswrapper[4996]: E0228 10:06:06.033556 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:06:07 crc kubenswrapper[4996]: I0228 10:06:07.047214 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c89340-1270-4c96-8dc8-70f2a0c5b8a5" path="/var/lib/kubelet/pods/a8c89340-1270-4c96-8dc8-70f2a0c5b8a5/volumes" Feb 28 10:06:19 crc kubenswrapper[4996]: I0228 10:06:19.033529 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:06:19 crc kubenswrapper[4996]: E0228 10:06:19.034585 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:06:33 crc kubenswrapper[4996]: I0228 10:06:33.033729 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:06:33 crc kubenswrapper[4996]: E0228 10:06:33.034576 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:06:45 crc kubenswrapper[4996]: I0228 10:06:45.037420 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:06:45 crc kubenswrapper[4996]: E0228 10:06:45.038651 4996 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:06:47 crc kubenswrapper[4996]: I0228 10:06:47.181829 4996 scope.go:117] "RemoveContainer" containerID="bf2cb9d71a12eb59503c2b7d57f630f1c6537a683dac6c73f9c3bc1a3d6dd21b" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.667874 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hqkwt"] Feb 28 10:06:58 crc kubenswrapper[4996]: E0228 10:06:58.671058 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9c725b1-c96f-40f0-9642-a4e5f2e2588a" containerName="oc" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.671083 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9c725b1-c96f-40f0-9642-a4e5f2e2588a" containerName="oc" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.671405 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9c725b1-c96f-40f0-9642-a4e5f2e2588a" containerName="oc" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.672657 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.685706 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqkwt"] Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.709682 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-utilities\") pod \"certified-operators-hqkwt\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.709791 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-catalog-content\") pod \"certified-operators-hqkwt\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.709834 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg48m\" (UniqueName: \"kubernetes.io/projected/81acec9b-4c68-418b-a6a0-18569c7a1e7c-kube-api-access-dg48m\") pod \"certified-operators-hqkwt\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.811830 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-utilities\") pod \"certified-operators-hqkwt\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.812425 4996 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-utilities\") pod \"certified-operators-hqkwt\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.812422 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-catalog-content\") pod \"certified-operators-hqkwt\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.812553 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg48m\" (UniqueName: \"kubernetes.io/projected/81acec9b-4c68-418b-a6a0-18569c7a1e7c-kube-api-access-dg48m\") pod \"certified-operators-hqkwt\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.812856 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-catalog-content\") pod \"certified-operators-hqkwt\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:58 crc kubenswrapper[4996]: I0228 10:06:58.839460 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg48m\" (UniqueName: \"kubernetes.io/projected/81acec9b-4c68-418b-a6a0-18569c7a1e7c-kube-api-access-dg48m\") pod \"certified-operators-hqkwt\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:59 crc kubenswrapper[4996]: I0228 10:06:59.014089 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:06:59 crc kubenswrapper[4996]: I0228 10:06:59.034665 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:06:59 crc kubenswrapper[4996]: E0228 10:06:59.034957 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:06:59 crc kubenswrapper[4996]: I0228 10:06:59.619573 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqkwt"] Feb 28 10:06:59 crc kubenswrapper[4996]: I0228 10:06:59.734796 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqkwt" event={"ID":"81acec9b-4c68-418b-a6a0-18569c7a1e7c","Type":"ContainerStarted","Data":"af62c730fc230f92c0209659542cb36d67e69651871d5adcd93b0d8f182b3eda"} Feb 28 10:07:00 crc kubenswrapper[4996]: I0228 10:07:00.746786 4996 generic.go:334] "Generic (PLEG): container finished" podID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerID="9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314" exitCode=0 Feb 28 10:07:00 crc kubenswrapper[4996]: I0228 10:07:00.747285 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqkwt" event={"ID":"81acec9b-4c68-418b-a6a0-18569c7a1e7c","Type":"ContainerDied","Data":"9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314"} Feb 28 10:07:01 crc kubenswrapper[4996]: I0228 10:07:01.765574 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqkwt" 
event={"ID":"81acec9b-4c68-418b-a6a0-18569c7a1e7c","Type":"ContainerStarted","Data":"b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a"} Feb 28 10:07:03 crc kubenswrapper[4996]: I0228 10:07:03.782952 4996 generic.go:334] "Generic (PLEG): container finished" podID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerID="b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a" exitCode=0 Feb 28 10:07:03 crc kubenswrapper[4996]: I0228 10:07:03.783105 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqkwt" event={"ID":"81acec9b-4c68-418b-a6a0-18569c7a1e7c","Type":"ContainerDied","Data":"b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a"} Feb 28 10:07:04 crc kubenswrapper[4996]: I0228 10:07:04.796289 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqkwt" event={"ID":"81acec9b-4c68-418b-a6a0-18569c7a1e7c","Type":"ContainerStarted","Data":"bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb"} Feb 28 10:07:04 crc kubenswrapper[4996]: I0228 10:07:04.816895 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hqkwt" podStartSLOduration=3.420391578 podStartE2EDuration="6.816881408s" podCreationTimestamp="2026-02-28 10:06:58 +0000 UTC" firstStartedPulling="2026-02-28 10:07:00.748759278 +0000 UTC m=+3984.439562109" lastFinishedPulling="2026-02-28 10:07:04.145249128 +0000 UTC m=+3987.836051939" observedRunningTime="2026-02-28 10:07:04.812360977 +0000 UTC m=+3988.503163798" watchObservedRunningTime="2026-02-28 10:07:04.816881408 +0000 UTC m=+3988.507684219" Feb 28 10:07:09 crc kubenswrapper[4996]: I0228 10:07:09.015105 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:07:09 crc kubenswrapper[4996]: I0228 10:07:09.016332 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:07:09 crc kubenswrapper[4996]: I0228 10:07:09.071846 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:07:09 crc kubenswrapper[4996]: I0228 10:07:09.883398 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:07:09 crc kubenswrapper[4996]: I0228 10:07:09.929109 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqkwt"] Feb 28 10:07:11 crc kubenswrapper[4996]: I0228 10:07:11.033987 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:07:11 crc kubenswrapper[4996]: E0228 10:07:11.035267 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:07:11 crc kubenswrapper[4996]: I0228 10:07:11.854234 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hqkwt" podUID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerName="registry-server" containerID="cri-o://bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb" gracePeriod=2 Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.568813 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.697860 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-catalog-content\") pod \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.698028 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg48m\" (UniqueName: \"kubernetes.io/projected/81acec9b-4c68-418b-a6a0-18569c7a1e7c-kube-api-access-dg48m\") pod \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.698279 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-utilities\") pod \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\" (UID: \"81acec9b-4c68-418b-a6a0-18569c7a1e7c\") " Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.699727 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-utilities" (OuterVolumeSpecName: "utilities") pod "81acec9b-4c68-418b-a6a0-18569c7a1e7c" (UID: "81acec9b-4c68-418b-a6a0-18569c7a1e7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.706549 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81acec9b-4c68-418b-a6a0-18569c7a1e7c-kube-api-access-dg48m" (OuterVolumeSpecName: "kube-api-access-dg48m") pod "81acec9b-4c68-418b-a6a0-18569c7a1e7c" (UID: "81acec9b-4c68-418b-a6a0-18569c7a1e7c"). InnerVolumeSpecName "kube-api-access-dg48m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.758276 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81acec9b-4c68-418b-a6a0-18569c7a1e7c" (UID: "81acec9b-4c68-418b-a6a0-18569c7a1e7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.801452 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg48m\" (UniqueName: \"kubernetes.io/projected/81acec9b-4c68-418b-a6a0-18569c7a1e7c-kube-api-access-dg48m\") on node \"crc\" DevicePath \"\"" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.801508 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.801526 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81acec9b-4c68-418b-a6a0-18569c7a1e7c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.867707 4996 generic.go:334] "Generic (PLEG): container finished" podID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerID="bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb" exitCode=0 Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.867774 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqkwt" event={"ID":"81acec9b-4c68-418b-a6a0-18569c7a1e7c","Type":"ContainerDied","Data":"bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb"} Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.867817 4996 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-hqkwt" event={"ID":"81acec9b-4c68-418b-a6a0-18569c7a1e7c","Type":"ContainerDied","Data":"af62c730fc230f92c0209659542cb36d67e69651871d5adcd93b0d8f182b3eda"} Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.867841 4996 scope.go:117] "RemoveContainer" containerID="bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.867864 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqkwt" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.891735 4996 scope.go:117] "RemoveContainer" containerID="b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.919934 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqkwt"] Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.930685 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hqkwt"] Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.937168 4996 scope.go:117] "RemoveContainer" containerID="9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.965265 4996 scope.go:117] "RemoveContainer" containerID="bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb" Feb 28 10:07:12 crc kubenswrapper[4996]: E0228 10:07:12.965656 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb\": container with ID starting with bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb not found: ID does not exist" containerID="bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 
10:07:12.965685 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb"} err="failed to get container status \"bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb\": rpc error: code = NotFound desc = could not find container \"bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb\": container with ID starting with bf9d86dc31938ff923006e19eca10331d3de3c92de5db56c6fbd228cff2299cb not found: ID does not exist" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.965703 4996 scope.go:117] "RemoveContainer" containerID="b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a" Feb 28 10:07:12 crc kubenswrapper[4996]: E0228 10:07:12.966248 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a\": container with ID starting with b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a not found: ID does not exist" containerID="b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.966282 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a"} err="failed to get container status \"b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a\": rpc error: code = NotFound desc = could not find container \"b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a\": container with ID starting with b9a669fd39c123578d8e412b628ea1e20e31d963b24c2e7e8d7e15167574128a not found: ID does not exist" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.966298 4996 scope.go:117] "RemoveContainer" containerID="9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314" Feb 28 10:07:12 crc 
kubenswrapper[4996]: E0228 10:07:12.966548 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314\": container with ID starting with 9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314 not found: ID does not exist" containerID="9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314" Feb 28 10:07:12 crc kubenswrapper[4996]: I0228 10:07:12.966583 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314"} err="failed to get container status \"9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314\": rpc error: code = NotFound desc = could not find container \"9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314\": container with ID starting with 9a70b65c9430bb043fa86ac0e6d2efc8384e23cb2a2ff6508d978e5f9a002314 not found: ID does not exist" Feb 28 10:07:13 crc kubenswrapper[4996]: I0228 10:07:13.042570 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" path="/var/lib/kubelet/pods/81acec9b-4c68-418b-a6a0-18569c7a1e7c/volumes" Feb 28 10:07:22 crc kubenswrapper[4996]: I0228 10:07:22.033534 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:07:22 crc kubenswrapper[4996]: I0228 10:07:22.956296 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"1ef5b5acc6b6fa0b0f0596617327a741863647364cd8a14e093baf55fb7c0c6e"} Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.145919 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537888-bnc6c"] Feb 28 10:08:00 crc 
kubenswrapper[4996]: E0228 10:08:00.146951 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerName="extract-utilities" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.146971 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerName="extract-utilities" Feb 28 10:08:00 crc kubenswrapper[4996]: E0228 10:08:00.147030 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerName="extract-content" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.147040 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerName="extract-content" Feb 28 10:08:00 crc kubenswrapper[4996]: E0228 10:08:00.147053 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerName="registry-server" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.147062 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerName="registry-server" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.147299 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="81acec9b-4c68-418b-a6a0-18569c7a1e7c" containerName="registry-server" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.148192 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537888-bnc6c" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.151750 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.152071 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.152297 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.160815 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537888-bnc6c"] Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.306629 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87cf7\" (UniqueName: \"kubernetes.io/projected/8b8b77a9-41f4-4bed-b05e-d900efeeb16c-kube-api-access-87cf7\") pod \"auto-csr-approver-29537888-bnc6c\" (UID: \"8b8b77a9-41f4-4bed-b05e-d900efeeb16c\") " pod="openshift-infra/auto-csr-approver-29537888-bnc6c" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.408978 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87cf7\" (UniqueName: \"kubernetes.io/projected/8b8b77a9-41f4-4bed-b05e-d900efeeb16c-kube-api-access-87cf7\") pod \"auto-csr-approver-29537888-bnc6c\" (UID: \"8b8b77a9-41f4-4bed-b05e-d900efeeb16c\") " pod="openshift-infra/auto-csr-approver-29537888-bnc6c" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.434830 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87cf7\" (UniqueName: \"kubernetes.io/projected/8b8b77a9-41f4-4bed-b05e-d900efeeb16c-kube-api-access-87cf7\") pod \"auto-csr-approver-29537888-bnc6c\" (UID: \"8b8b77a9-41f4-4bed-b05e-d900efeeb16c\") " 
pod="openshift-infra/auto-csr-approver-29537888-bnc6c" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.470208 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537888-bnc6c" Feb 28 10:08:00 crc kubenswrapper[4996]: I0228 10:08:00.946603 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537888-bnc6c"] Feb 28 10:08:01 crc kubenswrapper[4996]: I0228 10:08:01.489146 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537888-bnc6c" event={"ID":"8b8b77a9-41f4-4bed-b05e-d900efeeb16c","Type":"ContainerStarted","Data":"3101dbded92adc581c93642b1475e1511b0c3c7eabcaa12d8faaec49d5ec0c39"} Feb 28 10:08:02 crc kubenswrapper[4996]: I0228 10:08:02.502560 4996 generic.go:334] "Generic (PLEG): container finished" podID="8b8b77a9-41f4-4bed-b05e-d900efeeb16c" containerID="875d1deaac5c5ea131f9ddd2ced96287883ad5c2ef3565b657b156023a9751da" exitCode=0 Feb 28 10:08:02 crc kubenswrapper[4996]: I0228 10:08:02.502662 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537888-bnc6c" event={"ID":"8b8b77a9-41f4-4bed-b05e-d900efeeb16c","Type":"ContainerDied","Data":"875d1deaac5c5ea131f9ddd2ced96287883ad5c2ef3565b657b156023a9751da"} Feb 28 10:08:04 crc kubenswrapper[4996]: I0228 10:08:04.056412 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537888-bnc6c" Feb 28 10:08:04 crc kubenswrapper[4996]: I0228 10:08:04.183070 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87cf7\" (UniqueName: \"kubernetes.io/projected/8b8b77a9-41f4-4bed-b05e-d900efeeb16c-kube-api-access-87cf7\") pod \"8b8b77a9-41f4-4bed-b05e-d900efeeb16c\" (UID: \"8b8b77a9-41f4-4bed-b05e-d900efeeb16c\") " Feb 28 10:08:04 crc kubenswrapper[4996]: I0228 10:08:04.188647 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8b77a9-41f4-4bed-b05e-d900efeeb16c-kube-api-access-87cf7" (OuterVolumeSpecName: "kube-api-access-87cf7") pod "8b8b77a9-41f4-4bed-b05e-d900efeeb16c" (UID: "8b8b77a9-41f4-4bed-b05e-d900efeeb16c"). InnerVolumeSpecName "kube-api-access-87cf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:08:04 crc kubenswrapper[4996]: I0228 10:08:04.285686 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87cf7\" (UniqueName: \"kubernetes.io/projected/8b8b77a9-41f4-4bed-b05e-d900efeeb16c-kube-api-access-87cf7\") on node \"crc\" DevicePath \"\"" Feb 28 10:08:04 crc kubenswrapper[4996]: I0228 10:08:04.521600 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537888-bnc6c" event={"ID":"8b8b77a9-41f4-4bed-b05e-d900efeeb16c","Type":"ContainerDied","Data":"3101dbded92adc581c93642b1475e1511b0c3c7eabcaa12d8faaec49d5ec0c39"} Feb 28 10:08:04 crc kubenswrapper[4996]: I0228 10:08:04.521640 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3101dbded92adc581c93642b1475e1511b0c3c7eabcaa12d8faaec49d5ec0c39" Feb 28 10:08:04 crc kubenswrapper[4996]: I0228 10:08:04.521690 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537888-bnc6c" Feb 28 10:08:05 crc kubenswrapper[4996]: I0228 10:08:05.146917 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537882-bc9k8"] Feb 28 10:08:05 crc kubenswrapper[4996]: I0228 10:08:05.158338 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537882-bc9k8"] Feb 28 10:08:07 crc kubenswrapper[4996]: I0228 10:08:07.050097 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235311a5-1051-4a97-9523-ee0de1a5f461" path="/var/lib/kubelet/pods/235311a5-1051-4a97-9523-ee0de1a5f461/volumes" Feb 28 10:08:47 crc kubenswrapper[4996]: I0228 10:08:47.308405 4996 scope.go:117] "RemoveContainer" containerID="e2cfc4b485654c66327f216132e6bab7d4e3fc8f554d53679ae77e17c908acb0" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.202929 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vnnct"] Feb 28 10:09:30 crc kubenswrapper[4996]: E0228 10:09:30.204096 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8b77a9-41f4-4bed-b05e-d900efeeb16c" containerName="oc" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.204113 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8b77a9-41f4-4bed-b05e-d900efeeb16c" containerName="oc" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.204322 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8b77a9-41f4-4bed-b05e-d900efeeb16c" containerName="oc" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.205924 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.218733 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnnct"] Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.287757 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-catalog-content\") pod \"redhat-marketplace-vnnct\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.287902 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-utilities\") pod \"redhat-marketplace-vnnct\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.287957 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcq5l\" (UniqueName: \"kubernetes.io/projected/2a944275-9aaf-422d-ba6c-35bcfa8609e6-kube-api-access-vcq5l\") pod \"redhat-marketplace-vnnct\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.389678 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-utilities\") pod \"redhat-marketplace-vnnct\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.391726 4996 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-vcq5l\" (UniqueName: \"kubernetes.io/projected/2a944275-9aaf-422d-ba6c-35bcfa8609e6-kube-api-access-vcq5l\") pod \"redhat-marketplace-vnnct\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.391894 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-catalog-content\") pod \"redhat-marketplace-vnnct\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.392639 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-catalog-content\") pod \"redhat-marketplace-vnnct\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.392918 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-utilities\") pod \"redhat-marketplace-vnnct\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.437923 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcq5l\" (UniqueName: \"kubernetes.io/projected/2a944275-9aaf-422d-ba6c-35bcfa8609e6-kube-api-access-vcq5l\") pod \"redhat-marketplace-vnnct\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:30 crc kubenswrapper[4996]: I0228 10:09:30.540109 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:31 crc kubenswrapper[4996]: I0228 10:09:31.075588 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnnct"] Feb 28 10:09:31 crc kubenswrapper[4996]: I0228 10:09:31.303775 4996 generic.go:334] "Generic (PLEG): container finished" podID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerID="9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261" exitCode=0 Feb 28 10:09:31 crc kubenswrapper[4996]: I0228 10:09:31.304685 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnnct" event={"ID":"2a944275-9aaf-422d-ba6c-35bcfa8609e6","Type":"ContainerDied","Data":"9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261"} Feb 28 10:09:31 crc kubenswrapper[4996]: I0228 10:09:31.304735 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnnct" event={"ID":"2a944275-9aaf-422d-ba6c-35bcfa8609e6","Type":"ContainerStarted","Data":"088b16d9c9a0359d27c36f3395e58a5aa39cd3c989702d650b839ce34d1f8e01"} Feb 28 10:09:32 crc kubenswrapper[4996]: I0228 10:09:32.313882 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnnct" event={"ID":"2a944275-9aaf-422d-ba6c-35bcfa8609e6","Type":"ContainerStarted","Data":"affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3"} Feb 28 10:09:34 crc kubenswrapper[4996]: I0228 10:09:34.334927 4996 generic.go:334] "Generic (PLEG): container finished" podID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerID="affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3" exitCode=0 Feb 28 10:09:34 crc kubenswrapper[4996]: I0228 10:09:34.335184 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnnct" 
event={"ID":"2a944275-9aaf-422d-ba6c-35bcfa8609e6","Type":"ContainerDied","Data":"affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3"} Feb 28 10:09:35 crc kubenswrapper[4996]: I0228 10:09:35.346614 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnnct" event={"ID":"2a944275-9aaf-422d-ba6c-35bcfa8609e6","Type":"ContainerStarted","Data":"9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1"} Feb 28 10:09:35 crc kubenswrapper[4996]: I0228 10:09:35.375666 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vnnct" podStartSLOduration=1.964876125 podStartE2EDuration="5.375645033s" podCreationTimestamp="2026-02-28 10:09:30 +0000 UTC" firstStartedPulling="2026-02-28 10:09:31.30573625 +0000 UTC m=+4134.996539061" lastFinishedPulling="2026-02-28 10:09:34.716505168 +0000 UTC m=+4138.407307969" observedRunningTime="2026-02-28 10:09:35.372734222 +0000 UTC m=+4139.063537043" watchObservedRunningTime="2026-02-28 10:09:35.375645033 +0000 UTC m=+4139.066447844" Feb 28 10:09:40 crc kubenswrapper[4996]: I0228 10:09:40.540347 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:40 crc kubenswrapper[4996]: I0228 10:09:40.541206 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:40 crc kubenswrapper[4996]: I0228 10:09:40.605146 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:41 crc kubenswrapper[4996]: I0228 10:09:41.461670 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:41 crc kubenswrapper[4996]: I0228 10:09:41.531615 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-vnnct"] Feb 28 10:09:42 crc kubenswrapper[4996]: I0228 10:09:42.249203 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:09:42 crc kubenswrapper[4996]: I0228 10:09:42.249485 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:09:43 crc kubenswrapper[4996]: I0228 10:09:43.411206 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vnnct" podUID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerName="registry-server" containerID="cri-o://9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1" gracePeriod=2 Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.122491 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.188336 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-catalog-content\") pod \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.188415 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-utilities\") pod \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.188632 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcq5l\" (UniqueName: \"kubernetes.io/projected/2a944275-9aaf-422d-ba6c-35bcfa8609e6-kube-api-access-vcq5l\") pod \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\" (UID: \"2a944275-9aaf-422d-ba6c-35bcfa8609e6\") " Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.189669 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-utilities" (OuterVolumeSpecName: "utilities") pod "2a944275-9aaf-422d-ba6c-35bcfa8609e6" (UID: "2a944275-9aaf-422d-ba6c-35bcfa8609e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.195468 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a944275-9aaf-422d-ba6c-35bcfa8609e6-kube-api-access-vcq5l" (OuterVolumeSpecName: "kube-api-access-vcq5l") pod "2a944275-9aaf-422d-ba6c-35bcfa8609e6" (UID: "2a944275-9aaf-422d-ba6c-35bcfa8609e6"). InnerVolumeSpecName "kube-api-access-vcq5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.236436 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a944275-9aaf-422d-ba6c-35bcfa8609e6" (UID: "2a944275-9aaf-422d-ba6c-35bcfa8609e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.291459 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.291498 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcq5l\" (UniqueName: \"kubernetes.io/projected/2a944275-9aaf-422d-ba6c-35bcfa8609e6-kube-api-access-vcq5l\") on node \"crc\" DevicePath \"\"" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.291514 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a944275-9aaf-422d-ba6c-35bcfa8609e6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.424343 4996 generic.go:334] "Generic (PLEG): container finished" podID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerID="9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1" exitCode=0 Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.424390 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnnct" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.424394 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnnct" event={"ID":"2a944275-9aaf-422d-ba6c-35bcfa8609e6","Type":"ContainerDied","Data":"9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1"} Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.424424 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnnct" event={"ID":"2a944275-9aaf-422d-ba6c-35bcfa8609e6","Type":"ContainerDied","Data":"088b16d9c9a0359d27c36f3395e58a5aa39cd3c989702d650b839ce34d1f8e01"} Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.424444 4996 scope.go:117] "RemoveContainer" containerID="9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.477246 4996 scope.go:117] "RemoveContainer" containerID="affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.481901 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnnct"] Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.494745 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnnct"] Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.505339 4996 scope.go:117] "RemoveContainer" containerID="9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.564971 4996 scope.go:117] "RemoveContainer" containerID="9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1" Feb 28 10:09:44 crc kubenswrapper[4996]: E0228 10:09:44.566046 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1\": container with ID starting with 9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1 not found: ID does not exist" containerID="9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.566112 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1"} err="failed to get container status \"9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1\": rpc error: code = NotFound desc = could not find container \"9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1\": container with ID starting with 9205cb0d64e4da9453b5587d10f029bc66bab655158a29d72a8854a7b316e4b1 not found: ID does not exist" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.566149 4996 scope.go:117] "RemoveContainer" containerID="affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3" Feb 28 10:09:44 crc kubenswrapper[4996]: E0228 10:09:44.566919 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3\": container with ID starting with affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3 not found: ID does not exist" containerID="affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.566950 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3"} err="failed to get container status \"affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3\": rpc error: code = NotFound desc = could not find container \"affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3\": container with ID 
starting with affda82ae27c733b627b76328613d0dbaf2ded7f04d38973a6f293d185e5eed3 not found: ID does not exist" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.566965 4996 scope.go:117] "RemoveContainer" containerID="9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261" Feb 28 10:09:44 crc kubenswrapper[4996]: E0228 10:09:44.567498 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261\": container with ID starting with 9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261 not found: ID does not exist" containerID="9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261" Feb 28 10:09:44 crc kubenswrapper[4996]: I0228 10:09:44.567556 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261"} err="failed to get container status \"9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261\": rpc error: code = NotFound desc = could not find container \"9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261\": container with ID starting with 9eb77802939fad4d6ebe330570a0238784c04abd5ff8b4d82710b96fbc909261 not found: ID does not exist" Feb 28 10:09:45 crc kubenswrapper[4996]: I0228 10:09:45.046575 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" path="/var/lib/kubelet/pods/2a944275-9aaf-422d-ba6c-35bcfa8609e6/volumes" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.139428 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537890-k688m"] Feb 28 10:10:00 crc kubenswrapper[4996]: E0228 10:10:00.140384 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerName="extract-utilities" Feb 28 10:10:00 crc 
kubenswrapper[4996]: I0228 10:10:00.140401 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerName="extract-utilities" Feb 28 10:10:00 crc kubenswrapper[4996]: E0228 10:10:00.140434 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerName="registry-server" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.140444 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerName="registry-server" Feb 28 10:10:00 crc kubenswrapper[4996]: E0228 10:10:00.140467 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerName="extract-content" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.140475 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerName="extract-content" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.140655 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a944275-9aaf-422d-ba6c-35bcfa8609e6" containerName="registry-server" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.141263 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537890-k688m" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.143864 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.144159 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.144670 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.148870 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537890-k688m"] Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.219490 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x78v9\" (UniqueName: \"kubernetes.io/projected/9a7eecd3-0cb7-4b09-8014-414531ec452c-kube-api-access-x78v9\") pod \"auto-csr-approver-29537890-k688m\" (UID: \"9a7eecd3-0cb7-4b09-8014-414531ec452c\") " pod="openshift-infra/auto-csr-approver-29537890-k688m" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.322647 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x78v9\" (UniqueName: \"kubernetes.io/projected/9a7eecd3-0cb7-4b09-8014-414531ec452c-kube-api-access-x78v9\") pod \"auto-csr-approver-29537890-k688m\" (UID: \"9a7eecd3-0cb7-4b09-8014-414531ec452c\") " pod="openshift-infra/auto-csr-approver-29537890-k688m" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.345703 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x78v9\" (UniqueName: \"kubernetes.io/projected/9a7eecd3-0cb7-4b09-8014-414531ec452c-kube-api-access-x78v9\") pod \"auto-csr-approver-29537890-k688m\" (UID: \"9a7eecd3-0cb7-4b09-8014-414531ec452c\") " 
pod="openshift-infra/auto-csr-approver-29537890-k688m" Feb 28 10:10:00 crc kubenswrapper[4996]: I0228 10:10:00.464848 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537890-k688m" Feb 28 10:10:01 crc kubenswrapper[4996]: I0228 10:10:01.364396 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537890-k688m"] Feb 28 10:10:01 crc kubenswrapper[4996]: I0228 10:10:01.601038 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537890-k688m" event={"ID":"9a7eecd3-0cb7-4b09-8014-414531ec452c","Type":"ContainerStarted","Data":"6b457bc74d6021b7b7eeebddc28189733c3cf43fbc583c24fb6ad2c0cd8ac701"} Feb 28 10:10:03 crc kubenswrapper[4996]: I0228 10:10:03.617564 4996 generic.go:334] "Generic (PLEG): container finished" podID="9a7eecd3-0cb7-4b09-8014-414531ec452c" containerID="9deac47e7dedca0ac9be531482901965a34a83ee74e220381eba97c801111312" exitCode=0 Feb 28 10:10:03 crc kubenswrapper[4996]: I0228 10:10:03.617952 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537890-k688m" event={"ID":"9a7eecd3-0cb7-4b09-8014-414531ec452c","Type":"ContainerDied","Data":"9deac47e7dedca0ac9be531482901965a34a83ee74e220381eba97c801111312"} Feb 28 10:10:05 crc kubenswrapper[4996]: I0228 10:10:05.268288 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537890-k688m" Feb 28 10:10:05 crc kubenswrapper[4996]: I0228 10:10:05.325528 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x78v9\" (UniqueName: \"kubernetes.io/projected/9a7eecd3-0cb7-4b09-8014-414531ec452c-kube-api-access-x78v9\") pod \"9a7eecd3-0cb7-4b09-8014-414531ec452c\" (UID: \"9a7eecd3-0cb7-4b09-8014-414531ec452c\") " Feb 28 10:10:05 crc kubenswrapper[4996]: I0228 10:10:05.331607 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7eecd3-0cb7-4b09-8014-414531ec452c-kube-api-access-x78v9" (OuterVolumeSpecName: "kube-api-access-x78v9") pod "9a7eecd3-0cb7-4b09-8014-414531ec452c" (UID: "9a7eecd3-0cb7-4b09-8014-414531ec452c"). InnerVolumeSpecName "kube-api-access-x78v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:10:05 crc kubenswrapper[4996]: I0228 10:10:05.428125 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x78v9\" (UniqueName: \"kubernetes.io/projected/9a7eecd3-0cb7-4b09-8014-414531ec452c-kube-api-access-x78v9\") on node \"crc\" DevicePath \"\"" Feb 28 10:10:05 crc kubenswrapper[4996]: I0228 10:10:05.636177 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537890-k688m" event={"ID":"9a7eecd3-0cb7-4b09-8014-414531ec452c","Type":"ContainerDied","Data":"6b457bc74d6021b7b7eeebddc28189733c3cf43fbc583c24fb6ad2c0cd8ac701"} Feb 28 10:10:05 crc kubenswrapper[4996]: I0228 10:10:05.636229 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537890-k688m" Feb 28 10:10:05 crc kubenswrapper[4996]: I0228 10:10:05.636230 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b457bc74d6021b7b7eeebddc28189733c3cf43fbc583c24fb6ad2c0cd8ac701" Feb 28 10:10:06 crc kubenswrapper[4996]: I0228 10:10:06.348358 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537884-cfpmr"] Feb 28 10:10:06 crc kubenswrapper[4996]: I0228 10:10:06.357661 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537884-cfpmr"] Feb 28 10:10:07 crc kubenswrapper[4996]: I0228 10:10:07.055101 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0c33c9-4a25-4abd-b05b-4315c7292773" path="/var/lib/kubelet/pods/ae0c33c9-4a25-4abd-b05b-4315c7292773/volumes" Feb 28 10:10:12 crc kubenswrapper[4996]: I0228 10:10:12.248902 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:10:12 crc kubenswrapper[4996]: I0228 10:10:12.249600 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:10:26 crc kubenswrapper[4996]: I0228 10:10:26.996895 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4g2t4"] Feb 28 10:10:26 crc kubenswrapper[4996]: E0228 10:10:26.999155 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7eecd3-0cb7-4b09-8014-414531ec452c" 
containerName="oc" Feb 28 10:10:26 crc kubenswrapper[4996]: I0228 10:10:26.999178 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7eecd3-0cb7-4b09-8014-414531ec452c" containerName="oc" Feb 28 10:10:26 crc kubenswrapper[4996]: I0228 10:10:26.999461 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7eecd3-0cb7-4b09-8014-414531ec452c" containerName="oc" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.001620 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.007990 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4g2t4"] Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.087333 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwm9q\" (UniqueName: \"kubernetes.io/projected/cae1bad3-e70b-4b97-89e3-cd0afade820b-kube-api-access-nwm9q\") pod \"redhat-operators-4g2t4\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.087460 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-catalog-content\") pod \"redhat-operators-4g2t4\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.087494 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-utilities\") pod \"redhat-operators-4g2t4\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc 
kubenswrapper[4996]: I0228 10:10:27.190131 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwm9q\" (UniqueName: \"kubernetes.io/projected/cae1bad3-e70b-4b97-89e3-cd0afade820b-kube-api-access-nwm9q\") pod \"redhat-operators-4g2t4\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.190265 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-catalog-content\") pod \"redhat-operators-4g2t4\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.190299 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-utilities\") pod \"redhat-operators-4g2t4\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.190859 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-catalog-content\") pod \"redhat-operators-4g2t4\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.190887 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-utilities\") pod \"redhat-operators-4g2t4\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.210427 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwm9q\" (UniqueName: \"kubernetes.io/projected/cae1bad3-e70b-4b97-89e3-cd0afade820b-kube-api-access-nwm9q\") pod \"redhat-operators-4g2t4\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.335233 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.798793 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4g2t4"] Feb 28 10:10:27 crc kubenswrapper[4996]: I0228 10:10:27.830201 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g2t4" event={"ID":"cae1bad3-e70b-4b97-89e3-cd0afade820b","Type":"ContainerStarted","Data":"4258e63bf56725c6521d6aa336b178f53764a31bfc311b71ffa8796cfe1a3a3a"} Feb 28 10:10:28 crc kubenswrapper[4996]: I0228 10:10:28.841757 4996 generic.go:334] "Generic (PLEG): container finished" podID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerID="4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8" exitCode=0 Feb 28 10:10:28 crc kubenswrapper[4996]: I0228 10:10:28.842063 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g2t4" event={"ID":"cae1bad3-e70b-4b97-89e3-cd0afade820b","Type":"ContainerDied","Data":"4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8"} Feb 28 10:10:29 crc kubenswrapper[4996]: I0228 10:10:29.851492 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g2t4" event={"ID":"cae1bad3-e70b-4b97-89e3-cd0afade820b","Type":"ContainerStarted","Data":"739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030"} Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.373321 4996 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/community-operators-x85kz"] Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.375501 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.386782 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x85kz"] Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.497844 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-catalog-content\") pod \"community-operators-x85kz\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.497889 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-utilities\") pod \"community-operators-x85kz\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.497952 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsmp\" (UniqueName: \"kubernetes.io/projected/42c2df09-f11c-40e8-854d-97e8f9e58b9a-kube-api-access-qzsmp\") pod \"community-operators-x85kz\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.600831 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-catalog-content\") pod \"community-operators-x85kz\" (UID: 
\"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.601277 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-utilities\") pod \"community-operators-x85kz\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.601464 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzsmp\" (UniqueName: \"kubernetes.io/projected/42c2df09-f11c-40e8-854d-97e8f9e58b9a-kube-api-access-qzsmp\") pod \"community-operators-x85kz\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.601549 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-catalog-content\") pod \"community-operators-x85kz\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.601756 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-utilities\") pod \"community-operators-x85kz\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.625028 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzsmp\" (UniqueName: \"kubernetes.io/projected/42c2df09-f11c-40e8-854d-97e8f9e58b9a-kube-api-access-qzsmp\") pod \"community-operators-x85kz\" (UID: 
\"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:32 crc kubenswrapper[4996]: I0228 10:10:32.700116 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:34 crc kubenswrapper[4996]: I0228 10:10:34.091655 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x85kz"] Feb 28 10:10:34 crc kubenswrapper[4996]: I0228 10:10:34.911303 4996 generic.go:334] "Generic (PLEG): container finished" podID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerID="b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0" exitCode=0 Feb 28 10:10:34 crc kubenswrapper[4996]: I0228 10:10:34.911667 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x85kz" event={"ID":"42c2df09-f11c-40e8-854d-97e8f9e58b9a","Type":"ContainerDied","Data":"b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0"} Feb 28 10:10:34 crc kubenswrapper[4996]: I0228 10:10:34.911704 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x85kz" event={"ID":"42c2df09-f11c-40e8-854d-97e8f9e58b9a","Type":"ContainerStarted","Data":"d63a2efa9fddba106e3e58bdfe9e5ce4a0fd4d41c2691c0f9448513e35968968"} Feb 28 10:10:34 crc kubenswrapper[4996]: I0228 10:10:34.916420 4996 generic.go:334] "Generic (PLEG): container finished" podID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerID="739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030" exitCode=0 Feb 28 10:10:34 crc kubenswrapper[4996]: I0228 10:10:34.916497 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g2t4" event={"ID":"cae1bad3-e70b-4b97-89e3-cd0afade820b","Type":"ContainerDied","Data":"739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030"} Feb 28 10:10:35 crc kubenswrapper[4996]: I0228 
10:10:35.926877 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g2t4" event={"ID":"cae1bad3-e70b-4b97-89e3-cd0afade820b","Type":"ContainerStarted","Data":"6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539"} Feb 28 10:10:35 crc kubenswrapper[4996]: I0228 10:10:35.950167 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4g2t4" podStartSLOduration=3.439851198 podStartE2EDuration="9.95014818s" podCreationTimestamp="2026-02-28 10:10:26 +0000 UTC" firstStartedPulling="2026-02-28 10:10:28.844071265 +0000 UTC m=+4192.534874076" lastFinishedPulling="2026-02-28 10:10:35.354368247 +0000 UTC m=+4199.045171058" observedRunningTime="2026-02-28 10:10:35.943241451 +0000 UTC m=+4199.634044272" watchObservedRunningTime="2026-02-28 10:10:35.95014818 +0000 UTC m=+4199.640950991" Feb 28 10:10:36 crc kubenswrapper[4996]: I0228 10:10:36.937742 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x85kz" event={"ID":"42c2df09-f11c-40e8-854d-97e8f9e58b9a","Type":"ContainerStarted","Data":"d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2"} Feb 28 10:10:37 crc kubenswrapper[4996]: I0228 10:10:37.335360 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:37 crc kubenswrapper[4996]: I0228 10:10:37.335610 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:37 crc kubenswrapper[4996]: I0228 10:10:37.949941 4996 generic.go:334] "Generic (PLEG): container finished" podID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerID="d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2" exitCode=0 Feb 28 10:10:37 crc kubenswrapper[4996]: I0228 10:10:37.950036 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-x85kz" event={"ID":"42c2df09-f11c-40e8-854d-97e8f9e58b9a","Type":"ContainerDied","Data":"d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2"} Feb 28 10:10:38 crc kubenswrapper[4996]: I0228 10:10:38.388703 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4g2t4" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerName="registry-server" probeResult="failure" output=< Feb 28 10:10:38 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:10:38 crc kubenswrapper[4996]: > Feb 28 10:10:38 crc kubenswrapper[4996]: I0228 10:10:38.959749 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x85kz" event={"ID":"42c2df09-f11c-40e8-854d-97e8f9e58b9a","Type":"ContainerStarted","Data":"3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86"} Feb 28 10:10:38 crc kubenswrapper[4996]: I0228 10:10:38.989300 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x85kz" podStartSLOduration=3.549832118 podStartE2EDuration="6.989286354s" podCreationTimestamp="2026-02-28 10:10:32 +0000 UTC" firstStartedPulling="2026-02-28 10:10:34.915731064 +0000 UTC m=+4198.606533915" lastFinishedPulling="2026-02-28 10:10:38.35518534 +0000 UTC m=+4202.045988151" observedRunningTime="2026-02-28 10:10:38.985676487 +0000 UTC m=+4202.676479318" watchObservedRunningTime="2026-02-28 10:10:38.989286354 +0000 UTC m=+4202.680089165" Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.249424 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.249740 4996 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.249795 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.250581 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ef5b5acc6b6fa0b0f0596617327a741863647364cd8a14e093baf55fb7c0c6e"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.250629 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://1ef5b5acc6b6fa0b0f0596617327a741863647364cd8a14e093baf55fb7c0c6e" gracePeriod=600 Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.700791 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.701126 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.752861 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.994758 4996 
generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="1ef5b5acc6b6fa0b0f0596617327a741863647364cd8a14e093baf55fb7c0c6e" exitCode=0 Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.994802 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"1ef5b5acc6b6fa0b0f0596617327a741863647364cd8a14e093baf55fb7c0c6e"} Feb 28 10:10:42 crc kubenswrapper[4996]: I0228 10:10:42.994876 4996 scope.go:117] "RemoveContainer" containerID="c607348f25f1090d066b152455104945e23cea3dc1c814f990611766fdf6327a" Feb 28 10:10:44 crc kubenswrapper[4996]: I0228 10:10:44.010545 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18"} Feb 28 10:10:47 crc kubenswrapper[4996]: I0228 10:10:47.423767 4996 scope.go:117] "RemoveContainer" containerID="8ccd1577f3f869a783b680332035be246c55184bfadb335e35684de5ccb16378" Feb 28 10:10:48 crc kubenswrapper[4996]: I0228 10:10:48.391514 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4g2t4" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerName="registry-server" probeResult="failure" output=< Feb 28 10:10:48 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:10:48 crc kubenswrapper[4996]: > Feb 28 10:10:52 crc kubenswrapper[4996]: I0228 10:10:52.755736 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:52 crc kubenswrapper[4996]: I0228 10:10:52.803130 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x85kz"] 
Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.104136 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x85kz" podUID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerName="registry-server" containerID="cri-o://3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86" gracePeriod=2 Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.814825 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.855522 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-catalog-content\") pod \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.855636 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-utilities\") pod \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.855680 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzsmp\" (UniqueName: \"kubernetes.io/projected/42c2df09-f11c-40e8-854d-97e8f9e58b9a-kube-api-access-qzsmp\") pod \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\" (UID: \"42c2df09-f11c-40e8-854d-97e8f9e58b9a\") " Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.856462 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-utilities" (OuterVolumeSpecName: "utilities") pod "42c2df09-f11c-40e8-854d-97e8f9e58b9a" (UID: "42c2df09-f11c-40e8-854d-97e8f9e58b9a"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.862084 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c2df09-f11c-40e8-854d-97e8f9e58b9a-kube-api-access-qzsmp" (OuterVolumeSpecName: "kube-api-access-qzsmp") pod "42c2df09-f11c-40e8-854d-97e8f9e58b9a" (UID: "42c2df09-f11c-40e8-854d-97e8f9e58b9a"). InnerVolumeSpecName "kube-api-access-qzsmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.915427 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42c2df09-f11c-40e8-854d-97e8f9e58b9a" (UID: "42c2df09-f11c-40e8-854d-97e8f9e58b9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.958895 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.959268 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42c2df09-f11c-40e8-854d-97e8f9e58b9a-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:10:53 crc kubenswrapper[4996]: I0228 10:10:53.959279 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzsmp\" (UniqueName: \"kubernetes.io/projected/42c2df09-f11c-40e8-854d-97e8f9e58b9a-kube-api-access-qzsmp\") on node \"crc\" DevicePath \"\"" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.113433 4996 generic.go:334] "Generic (PLEG): container finished" podID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" 
containerID="3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86" exitCode=0 Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.113471 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x85kz" event={"ID":"42c2df09-f11c-40e8-854d-97e8f9e58b9a","Type":"ContainerDied","Data":"3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86"} Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.113496 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x85kz" event={"ID":"42c2df09-f11c-40e8-854d-97e8f9e58b9a","Type":"ContainerDied","Data":"d63a2efa9fddba106e3e58bdfe9e5ce4a0fd4d41c2691c0f9448513e35968968"} Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.113512 4996 scope.go:117] "RemoveContainer" containerID="3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.113613 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x85kz" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.130495 4996 scope.go:117] "RemoveContainer" containerID="d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.146306 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x85kz"] Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.157586 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x85kz"] Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.732829 4996 scope.go:117] "RemoveContainer" containerID="b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.875492 4996 scope.go:117] "RemoveContainer" containerID="3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86" Feb 28 10:10:54 crc kubenswrapper[4996]: E0228 10:10:54.876140 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86\": container with ID starting with 3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86 not found: ID does not exist" containerID="3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.876197 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86"} err="failed to get container status \"3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86\": rpc error: code = NotFound desc = could not find container \"3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86\": container with ID starting with 3dd606838be07c415d5c3ea2a72bda8fadffd84f088dfdca4809deec0270cb86 not 
found: ID does not exist" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.876240 4996 scope.go:117] "RemoveContainer" containerID="d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2" Feb 28 10:10:54 crc kubenswrapper[4996]: E0228 10:10:54.876690 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2\": container with ID starting with d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2 not found: ID does not exist" containerID="d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.876742 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2"} err="failed to get container status \"d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2\": rpc error: code = NotFound desc = could not find container \"d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2\": container with ID starting with d294e62ec1cd3f5659c499e126c992ef4b34b5a405a63173a7149d3f2767a9b2 not found: ID does not exist" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.876773 4996 scope.go:117] "RemoveContainer" containerID="b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0" Feb 28 10:10:54 crc kubenswrapper[4996]: E0228 10:10:54.877235 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0\": container with ID starting with b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0 not found: ID does not exist" containerID="b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0" Feb 28 10:10:54 crc kubenswrapper[4996]: I0228 10:10:54.877268 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0"} err="failed to get container status \"b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0\": rpc error: code = NotFound desc = could not find container \"b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0\": container with ID starting with b0bcecc72c517ad02ad7bd925b024dbf594f4539965f6768c1fa1d92d720c7a0 not found: ID does not exist" Feb 28 10:10:55 crc kubenswrapper[4996]: I0228 10:10:55.052281 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" path="/var/lib/kubelet/pods/42c2df09-f11c-40e8-854d-97e8f9e58b9a/volumes" Feb 28 10:10:57 crc kubenswrapper[4996]: I0228 10:10:57.379125 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:57 crc kubenswrapper[4996]: I0228 10:10:57.424611 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:58 crc kubenswrapper[4996]: I0228 10:10:58.389489 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4g2t4"] Feb 28 10:10:59 crc kubenswrapper[4996]: I0228 10:10:59.164950 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4g2t4" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerName="registry-server" containerID="cri-o://6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539" gracePeriod=2 Feb 28 10:10:59 crc kubenswrapper[4996]: I0228 10:10:59.868363 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:10:59 crc kubenswrapper[4996]: I0228 10:10:59.977337 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-utilities\") pod \"cae1bad3-e70b-4b97-89e3-cd0afade820b\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " Feb 28 10:10:59 crc kubenswrapper[4996]: I0228 10:10:59.977483 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-catalog-content\") pod \"cae1bad3-e70b-4b97-89e3-cd0afade820b\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " Feb 28 10:10:59 crc kubenswrapper[4996]: I0228 10:10:59.977526 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwm9q\" (UniqueName: \"kubernetes.io/projected/cae1bad3-e70b-4b97-89e3-cd0afade820b-kube-api-access-nwm9q\") pod \"cae1bad3-e70b-4b97-89e3-cd0afade820b\" (UID: \"cae1bad3-e70b-4b97-89e3-cd0afade820b\") " Feb 28 10:10:59 crc kubenswrapper[4996]: I0228 10:10:59.978872 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-utilities" (OuterVolumeSpecName: "utilities") pod "cae1bad3-e70b-4b97-89e3-cd0afade820b" (UID: "cae1bad3-e70b-4b97-89e3-cd0afade820b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:10:59 crc kubenswrapper[4996]: I0228 10:10:59.989110 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae1bad3-e70b-4b97-89e3-cd0afade820b-kube-api-access-nwm9q" (OuterVolumeSpecName: "kube-api-access-nwm9q") pod "cae1bad3-e70b-4b97-89e3-cd0afade820b" (UID: "cae1bad3-e70b-4b97-89e3-cd0afade820b"). InnerVolumeSpecName "kube-api-access-nwm9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.080604 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwm9q\" (UniqueName: \"kubernetes.io/projected/cae1bad3-e70b-4b97-89e3-cd0afade820b-kube-api-access-nwm9q\") on node \"crc\" DevicePath \"\"" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.080901 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.105627 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cae1bad3-e70b-4b97-89e3-cd0afade820b" (UID: "cae1bad3-e70b-4b97-89e3-cd0afade820b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.175254 4996 generic.go:334] "Generic (PLEG): container finished" podID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerID="6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539" exitCode=0 Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.175301 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g2t4" event={"ID":"cae1bad3-e70b-4b97-89e3-cd0afade820b","Type":"ContainerDied","Data":"6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539"} Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.175360 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4g2t4" event={"ID":"cae1bad3-e70b-4b97-89e3-cd0afade820b","Type":"ContainerDied","Data":"4258e63bf56725c6521d6aa336b178f53764a31bfc311b71ffa8796cfe1a3a3a"} Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.175387 
4996 scope.go:117] "RemoveContainer" containerID="6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.175676 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4g2t4" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.182523 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cae1bad3-e70b-4b97-89e3-cd0afade820b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.196226 4996 scope.go:117] "RemoveContainer" containerID="739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.224734 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4g2t4"] Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.234115 4996 scope.go:117] "RemoveContainer" containerID="4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.236195 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4g2t4"] Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.270650 4996 scope.go:117] "RemoveContainer" containerID="6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539" Feb 28 10:11:00 crc kubenswrapper[4996]: E0228 10:11:00.271593 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539\": container with ID starting with 6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539 not found: ID does not exist" containerID="6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.271648 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539"} err="failed to get container status \"6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539\": rpc error: code = NotFound desc = could not find container \"6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539\": container with ID starting with 6f34a83edbb63b69e180b1fa31ab31b23863735dc4a48ef60ed16a2cf01df539 not found: ID does not exist" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.271681 4996 scope.go:117] "RemoveContainer" containerID="739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030" Feb 28 10:11:00 crc kubenswrapper[4996]: E0228 10:11:00.272076 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030\": container with ID starting with 739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030 not found: ID does not exist" containerID="739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.272108 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030"} err="failed to get container status \"739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030\": rpc error: code = NotFound desc = could not find container \"739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030\": container with ID starting with 739ca536322ae7baf22a21b41ae77fc72d40d9d713e4c9a436498d2d1729b030 not found: ID does not exist" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.272128 4996 scope.go:117] "RemoveContainer" containerID="4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8" Feb 28 10:11:00 crc kubenswrapper[4996]: E0228 
10:11:00.272456 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8\": container with ID starting with 4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8 not found: ID does not exist" containerID="4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8" Feb 28 10:11:00 crc kubenswrapper[4996]: I0228 10:11:00.272503 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8"} err="failed to get container status \"4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8\": rpc error: code = NotFound desc = could not find container \"4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8\": container with ID starting with 4b0e3ec817a77742a4f3fe54e73e1f4cc59d3f86105b103db19f4482b8f2eff8 not found: ID does not exist" Feb 28 10:11:01 crc kubenswrapper[4996]: I0228 10:11:01.043588 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" path="/var/lib/kubelet/pods/cae1bad3-e70b-4b97-89e3-cd0afade820b/volumes" Feb 28 10:12:00 crc kubenswrapper[4996]: E0228 10:12:00.129380 4996 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.096s" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.200781 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537892-bsgpw"] Feb 28 10:12:00 crc kubenswrapper[4996]: E0228 10:12:00.201575 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerName="extract-content" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.201599 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" 
containerName="extract-content" Feb 28 10:12:00 crc kubenswrapper[4996]: E0228 10:12:00.201618 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerName="extract-content" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.201625 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerName="extract-content" Feb 28 10:12:00 crc kubenswrapper[4996]: E0228 10:12:00.201641 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerName="registry-server" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.201648 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerName="registry-server" Feb 28 10:12:00 crc kubenswrapper[4996]: E0228 10:12:00.201672 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerName="registry-server" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.201679 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerName="registry-server" Feb 28 10:12:00 crc kubenswrapper[4996]: E0228 10:12:00.201688 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerName="extract-utilities" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.201697 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerName="extract-utilities" Feb 28 10:12:00 crc kubenswrapper[4996]: E0228 10:12:00.201709 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerName="extract-utilities" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.201717 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" 
containerName="extract-utilities" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.201940 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c2df09-f11c-40e8-854d-97e8f9e58b9a" containerName="registry-server" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.201970 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae1bad3-e70b-4b97-89e3-cd0afade820b" containerName="registry-server" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.202728 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537892-bsgpw" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.207784 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.209298 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.209569 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.212487 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537892-bsgpw"] Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.330181 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cp9n\" (UniqueName: \"kubernetes.io/projected/406af3bb-77d6-42ed-a8fc-4cb0752edb50-kube-api-access-5cp9n\") pod \"auto-csr-approver-29537892-bsgpw\" (UID: \"406af3bb-77d6-42ed-a8fc-4cb0752edb50\") " pod="openshift-infra/auto-csr-approver-29537892-bsgpw" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.432880 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cp9n\" (UniqueName: 
\"kubernetes.io/projected/406af3bb-77d6-42ed-a8fc-4cb0752edb50-kube-api-access-5cp9n\") pod \"auto-csr-approver-29537892-bsgpw\" (UID: \"406af3bb-77d6-42ed-a8fc-4cb0752edb50\") " pod="openshift-infra/auto-csr-approver-29537892-bsgpw" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.460369 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cp9n\" (UniqueName: \"kubernetes.io/projected/406af3bb-77d6-42ed-a8fc-4cb0752edb50-kube-api-access-5cp9n\") pod \"auto-csr-approver-29537892-bsgpw\" (UID: \"406af3bb-77d6-42ed-a8fc-4cb0752edb50\") " pod="openshift-infra/auto-csr-approver-29537892-bsgpw" Feb 28 10:12:00 crc kubenswrapper[4996]: I0228 10:12:00.524221 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537892-bsgpw" Feb 28 10:12:01 crc kubenswrapper[4996]: I0228 10:12:01.068585 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537892-bsgpw"] Feb 28 10:12:01 crc kubenswrapper[4996]: W0228 10:12:01.507144 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod406af3bb_77d6_42ed_a8fc_4cb0752edb50.slice/crio-2041645a9ebe809c9627ccd8ff900fe312cbadf857b82edb15a16725c527bf74 WatchSource:0}: Error finding container 2041645a9ebe809c9627ccd8ff900fe312cbadf857b82edb15a16725c527bf74: Status 404 returned error can't find the container with id 2041645a9ebe809c9627ccd8ff900fe312cbadf857b82edb15a16725c527bf74 Feb 28 10:12:01 crc kubenswrapper[4996]: I0228 10:12:01.509879 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:12:02 crc kubenswrapper[4996]: I0228 10:12:02.171156 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537892-bsgpw" 
event={"ID":"406af3bb-77d6-42ed-a8fc-4cb0752edb50","Type":"ContainerStarted","Data":"2041645a9ebe809c9627ccd8ff900fe312cbadf857b82edb15a16725c527bf74"} Feb 28 10:12:03 crc kubenswrapper[4996]: I0228 10:12:03.181872 4996 generic.go:334] "Generic (PLEG): container finished" podID="406af3bb-77d6-42ed-a8fc-4cb0752edb50" containerID="1fce6cf7c3fac80b6faf8f0563c81a4388d4a0893648a5f18ec8d4d4fbc61537" exitCode=0 Feb 28 10:12:03 crc kubenswrapper[4996]: I0228 10:12:03.181978 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537892-bsgpw" event={"ID":"406af3bb-77d6-42ed-a8fc-4cb0752edb50","Type":"ContainerDied","Data":"1fce6cf7c3fac80b6faf8f0563c81a4388d4a0893648a5f18ec8d4d4fbc61537"} Feb 28 10:12:04 crc kubenswrapper[4996]: I0228 10:12:04.822695 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537892-bsgpw" Feb 28 10:12:04 crc kubenswrapper[4996]: I0228 10:12:04.918321 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cp9n\" (UniqueName: \"kubernetes.io/projected/406af3bb-77d6-42ed-a8fc-4cb0752edb50-kube-api-access-5cp9n\") pod \"406af3bb-77d6-42ed-a8fc-4cb0752edb50\" (UID: \"406af3bb-77d6-42ed-a8fc-4cb0752edb50\") " Feb 28 10:12:04 crc kubenswrapper[4996]: I0228 10:12:04.929110 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406af3bb-77d6-42ed-a8fc-4cb0752edb50-kube-api-access-5cp9n" (OuterVolumeSpecName: "kube-api-access-5cp9n") pod "406af3bb-77d6-42ed-a8fc-4cb0752edb50" (UID: "406af3bb-77d6-42ed-a8fc-4cb0752edb50"). InnerVolumeSpecName "kube-api-access-5cp9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:12:05 crc kubenswrapper[4996]: I0228 10:12:05.022076 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cp9n\" (UniqueName: \"kubernetes.io/projected/406af3bb-77d6-42ed-a8fc-4cb0752edb50-kube-api-access-5cp9n\") on node \"crc\" DevicePath \"\"" Feb 28 10:12:05 crc kubenswrapper[4996]: I0228 10:12:05.201174 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537892-bsgpw" event={"ID":"406af3bb-77d6-42ed-a8fc-4cb0752edb50","Type":"ContainerDied","Data":"2041645a9ebe809c9627ccd8ff900fe312cbadf857b82edb15a16725c527bf74"} Feb 28 10:12:05 crc kubenswrapper[4996]: I0228 10:12:05.201232 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2041645a9ebe809c9627ccd8ff900fe312cbadf857b82edb15a16725c527bf74" Feb 28 10:12:05 crc kubenswrapper[4996]: I0228 10:12:05.201240 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537892-bsgpw" Feb 28 10:12:05 crc kubenswrapper[4996]: I0228 10:12:05.916377 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537886-7bc7m"] Feb 28 10:12:05 crc kubenswrapper[4996]: I0228 10:12:05.924131 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537886-7bc7m"] Feb 28 10:12:07 crc kubenswrapper[4996]: I0228 10:12:07.049796 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9c725b1-c96f-40f0-9642-a4e5f2e2588a" path="/var/lib/kubelet/pods/b9c725b1-c96f-40f0-9642-a4e5f2e2588a/volumes" Feb 28 10:12:47 crc kubenswrapper[4996]: I0228 10:12:47.580195 4996 scope.go:117] "RemoveContainer" containerID="3bab77a86579597387ea66c0fb79bb7ddba8102656b4187a9f0cebb4f528d744" Feb 28 10:13:12 crc kubenswrapper[4996]: I0228 10:13:12.249036 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:13:12 crc kubenswrapper[4996]: I0228 10:13:12.249532 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:13:42 crc kubenswrapper[4996]: I0228 10:13:42.249310 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:13:42 crc kubenswrapper[4996]: I0228 10:13:42.249882 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.140785 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537894-6lwxt"] Feb 28 10:14:00 crc kubenswrapper[4996]: E0228 10:14:00.141678 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406af3bb-77d6-42ed-a8fc-4cb0752edb50" containerName="oc" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.141694 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="406af3bb-77d6-42ed-a8fc-4cb0752edb50" containerName="oc" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.141871 4996 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="406af3bb-77d6-42ed-a8fc-4cb0752edb50" containerName="oc" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.142510 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537894-6lwxt" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.144174 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.144976 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.145367 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.188876 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537894-6lwxt"] Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.218937 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qvkh\" (UniqueName: \"kubernetes.io/projected/46ad8099-228f-41a1-9605-2bd8307cf75c-kube-api-access-7qvkh\") pod \"auto-csr-approver-29537894-6lwxt\" (UID: \"46ad8099-228f-41a1-9605-2bd8307cf75c\") " pod="openshift-infra/auto-csr-approver-29537894-6lwxt" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.321802 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qvkh\" (UniqueName: \"kubernetes.io/projected/46ad8099-228f-41a1-9605-2bd8307cf75c-kube-api-access-7qvkh\") pod \"auto-csr-approver-29537894-6lwxt\" (UID: \"46ad8099-228f-41a1-9605-2bd8307cf75c\") " pod="openshift-infra/auto-csr-approver-29537894-6lwxt" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.349188 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7qvkh\" (UniqueName: \"kubernetes.io/projected/46ad8099-228f-41a1-9605-2bd8307cf75c-kube-api-access-7qvkh\") pod \"auto-csr-approver-29537894-6lwxt\" (UID: \"46ad8099-228f-41a1-9605-2bd8307cf75c\") " pod="openshift-infra/auto-csr-approver-29537894-6lwxt" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.498762 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537894-6lwxt" Feb 28 10:14:00 crc kubenswrapper[4996]: I0228 10:14:00.969179 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537894-6lwxt"] Feb 28 10:14:01 crc kubenswrapper[4996]: I0228 10:14:01.301552 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537894-6lwxt" event={"ID":"46ad8099-228f-41a1-9605-2bd8307cf75c","Type":"ContainerStarted","Data":"2d9012f4a6b0e70fc9f4e321e74f368982ac94e2d927716db05c9fed68313c0e"} Feb 28 10:14:03 crc kubenswrapper[4996]: I0228 10:14:03.323393 4996 generic.go:334] "Generic (PLEG): container finished" podID="46ad8099-228f-41a1-9605-2bd8307cf75c" containerID="6d8aa9c4b6c9bac9cdfb9b79d4465fce777e2f75b716a16a04f9ed04203f8b52" exitCode=0 Feb 28 10:14:03 crc kubenswrapper[4996]: I0228 10:14:03.323794 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537894-6lwxt" event={"ID":"46ad8099-228f-41a1-9605-2bd8307cf75c","Type":"ContainerDied","Data":"6d8aa9c4b6c9bac9cdfb9b79d4465fce777e2f75b716a16a04f9ed04203f8b52"} Feb 28 10:14:04 crc kubenswrapper[4996]: I0228 10:14:04.899852 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537894-6lwxt" Feb 28 10:14:05 crc kubenswrapper[4996]: I0228 10:14:05.015393 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qvkh\" (UniqueName: \"kubernetes.io/projected/46ad8099-228f-41a1-9605-2bd8307cf75c-kube-api-access-7qvkh\") pod \"46ad8099-228f-41a1-9605-2bd8307cf75c\" (UID: \"46ad8099-228f-41a1-9605-2bd8307cf75c\") " Feb 28 10:14:05 crc kubenswrapper[4996]: I0228 10:14:05.020611 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ad8099-228f-41a1-9605-2bd8307cf75c-kube-api-access-7qvkh" (OuterVolumeSpecName: "kube-api-access-7qvkh") pod "46ad8099-228f-41a1-9605-2bd8307cf75c" (UID: "46ad8099-228f-41a1-9605-2bd8307cf75c"). InnerVolumeSpecName "kube-api-access-7qvkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:14:05 crc kubenswrapper[4996]: I0228 10:14:05.118466 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qvkh\" (UniqueName: \"kubernetes.io/projected/46ad8099-228f-41a1-9605-2bd8307cf75c-kube-api-access-7qvkh\") on node \"crc\" DevicePath \"\"" Feb 28 10:14:05 crc kubenswrapper[4996]: I0228 10:14:05.344566 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537894-6lwxt" event={"ID":"46ad8099-228f-41a1-9605-2bd8307cf75c","Type":"ContainerDied","Data":"2d9012f4a6b0e70fc9f4e321e74f368982ac94e2d927716db05c9fed68313c0e"} Feb 28 10:14:05 crc kubenswrapper[4996]: I0228 10:14:05.344614 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d9012f4a6b0e70fc9f4e321e74f368982ac94e2d927716db05c9fed68313c0e" Feb 28 10:14:05 crc kubenswrapper[4996]: I0228 10:14:05.344678 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537894-6lwxt" Feb 28 10:14:05 crc kubenswrapper[4996]: I0228 10:14:05.984687 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537888-bnc6c"] Feb 28 10:14:05 crc kubenswrapper[4996]: I0228 10:14:05.995884 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537888-bnc6c"] Feb 28 10:14:07 crc kubenswrapper[4996]: I0228 10:14:07.043251 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8b77a9-41f4-4bed-b05e-d900efeeb16c" path="/var/lib/kubelet/pods/8b8b77a9-41f4-4bed-b05e-d900efeeb16c/volumes" Feb 28 10:14:12 crc kubenswrapper[4996]: I0228 10:14:12.249237 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:14:12 crc kubenswrapper[4996]: I0228 10:14:12.249961 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:14:12 crc kubenswrapper[4996]: I0228 10:14:12.250077 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:14:12 crc kubenswrapper[4996]: I0228 10:14:12.251002 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:14:12 crc kubenswrapper[4996]: I0228 10:14:12.251138 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" gracePeriod=600 Feb 28 10:14:12 crc kubenswrapper[4996]: E0228 10:14:12.392537 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:14:12 crc kubenswrapper[4996]: I0228 10:14:12.414958 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" exitCode=0 Feb 28 10:14:12 crc kubenswrapper[4996]: I0228 10:14:12.415042 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18"} Feb 28 10:14:12 crc kubenswrapper[4996]: I0228 10:14:12.415090 4996 scope.go:117] "RemoveContainer" containerID="1ef5b5acc6b6fa0b0f0596617327a741863647364cd8a14e093baf55fb7c0c6e" Feb 28 10:14:12 crc kubenswrapper[4996]: I0228 10:14:12.415814 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:14:12 crc kubenswrapper[4996]: E0228 10:14:12.416161 4996 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:14:27 crc kubenswrapper[4996]: I0228 10:14:27.033716 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:14:27 crc kubenswrapper[4996]: E0228 10:14:27.034412 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:14:40 crc kubenswrapper[4996]: I0228 10:14:40.033191 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:14:40 crc kubenswrapper[4996]: E0228 10:14:40.034385 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:14:47 crc kubenswrapper[4996]: I0228 10:14:47.674062 4996 scope.go:117] "RemoveContainer" containerID="875d1deaac5c5ea131f9ddd2ced96287883ad5c2ef3565b657b156023a9751da" Feb 28 10:14:54 crc kubenswrapper[4996]: I0228 
10:14:54.033449 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:14:54 crc kubenswrapper[4996]: E0228 10:14:54.034877 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.197521 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx"] Feb 28 10:15:00 crc kubenswrapper[4996]: E0228 10:15:00.199303 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ad8099-228f-41a1-9605-2bd8307cf75c" containerName="oc" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.199329 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ad8099-228f-41a1-9605-2bd8307cf75c" containerName="oc" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.199664 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ad8099-228f-41a1-9605-2bd8307cf75c" containerName="oc" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.202434 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.207301 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.207369 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.212108 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx"] Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.265706 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bef7e16-096a-4e6e-98dc-77fea33afff9-secret-volume\") pod \"collect-profiles-29537895-n6zvx\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.265771 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bef7e16-096a-4e6e-98dc-77fea33afff9-config-volume\") pod \"collect-profiles-29537895-n6zvx\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.265831 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj696\" (UniqueName: \"kubernetes.io/projected/6bef7e16-096a-4e6e-98dc-77fea33afff9-kube-api-access-nj696\") pod \"collect-profiles-29537895-n6zvx\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.367573 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bef7e16-096a-4e6e-98dc-77fea33afff9-secret-volume\") pod \"collect-profiles-29537895-n6zvx\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.367647 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bef7e16-096a-4e6e-98dc-77fea33afff9-config-volume\") pod \"collect-profiles-29537895-n6zvx\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.367710 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj696\" (UniqueName: \"kubernetes.io/projected/6bef7e16-096a-4e6e-98dc-77fea33afff9-kube-api-access-nj696\") pod \"collect-profiles-29537895-n6zvx\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.370925 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bef7e16-096a-4e6e-98dc-77fea33afff9-config-volume\") pod \"collect-profiles-29537895-n6zvx\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.373843 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/6bef7e16-096a-4e6e-98dc-77fea33afff9-secret-volume\") pod \"collect-profiles-29537895-n6zvx\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.383802 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj696\" (UniqueName: \"kubernetes.io/projected/6bef7e16-096a-4e6e-98dc-77fea33afff9-kube-api-access-nj696\") pod \"collect-profiles-29537895-n6zvx\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.531048 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:00 crc kubenswrapper[4996]: I0228 10:15:00.990576 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx"] Feb 28 10:15:01 crc kubenswrapper[4996]: W0228 10:15:01.107734 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bef7e16_096a_4e6e_98dc_77fea33afff9.slice/crio-ee0a5b99f1f0813ac2217fb087c488afa1e424e10eed455d0a16b5a3724a7213 WatchSource:0}: Error finding container ee0a5b99f1f0813ac2217fb087c488afa1e424e10eed455d0a16b5a3724a7213: Status 404 returned error can't find the container with id ee0a5b99f1f0813ac2217fb087c488afa1e424e10eed455d0a16b5a3724a7213 Feb 28 10:15:01 crc kubenswrapper[4996]: I0228 10:15:01.888983 4996 generic.go:334] "Generic (PLEG): container finished" podID="6bef7e16-096a-4e6e-98dc-77fea33afff9" containerID="2b033874ec976c5b9edc5387276ef02a66da80d757b237549cb322f94ab1acf1" exitCode=0 Feb 28 10:15:01 crc kubenswrapper[4996]: I0228 10:15:01.889154 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" event={"ID":"6bef7e16-096a-4e6e-98dc-77fea33afff9","Type":"ContainerDied","Data":"2b033874ec976c5b9edc5387276ef02a66da80d757b237549cb322f94ab1acf1"} Feb 28 10:15:01 crc kubenswrapper[4996]: I0228 10:15:01.889522 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" event={"ID":"6bef7e16-096a-4e6e-98dc-77fea33afff9","Type":"ContainerStarted","Data":"ee0a5b99f1f0813ac2217fb087c488afa1e424e10eed455d0a16b5a3724a7213"} Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.492874 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.541428 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj696\" (UniqueName: \"kubernetes.io/projected/6bef7e16-096a-4e6e-98dc-77fea33afff9-kube-api-access-nj696\") pod \"6bef7e16-096a-4e6e-98dc-77fea33afff9\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.541578 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bef7e16-096a-4e6e-98dc-77fea33afff9-config-volume\") pod \"6bef7e16-096a-4e6e-98dc-77fea33afff9\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.541633 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bef7e16-096a-4e6e-98dc-77fea33afff9-secret-volume\") pod \"6bef7e16-096a-4e6e-98dc-77fea33afff9\" (UID: \"6bef7e16-096a-4e6e-98dc-77fea33afff9\") " Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.542695 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6bef7e16-096a-4e6e-98dc-77fea33afff9-config-volume" (OuterVolumeSpecName: "config-volume") pod "6bef7e16-096a-4e6e-98dc-77fea33afff9" (UID: "6bef7e16-096a-4e6e-98dc-77fea33afff9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.549033 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bef7e16-096a-4e6e-98dc-77fea33afff9-kube-api-access-nj696" (OuterVolumeSpecName: "kube-api-access-nj696") pod "6bef7e16-096a-4e6e-98dc-77fea33afff9" (UID: "6bef7e16-096a-4e6e-98dc-77fea33afff9"). InnerVolumeSpecName "kube-api-access-nj696". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.549151 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bef7e16-096a-4e6e-98dc-77fea33afff9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6bef7e16-096a-4e6e-98dc-77fea33afff9" (UID: "6bef7e16-096a-4e6e-98dc-77fea33afff9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.644075 4996 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6bef7e16-096a-4e6e-98dc-77fea33afff9-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.644126 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj696\" (UniqueName: \"kubernetes.io/projected/6bef7e16-096a-4e6e-98dc-77fea33afff9-kube-api-access-nj696\") on node \"crc\" DevicePath \"\"" Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.644143 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bef7e16-096a-4e6e-98dc-77fea33afff9-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.909169 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" event={"ID":"6bef7e16-096a-4e6e-98dc-77fea33afff9","Type":"ContainerDied","Data":"ee0a5b99f1f0813ac2217fb087c488afa1e424e10eed455d0a16b5a3724a7213"} Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.909236 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx" Feb 28 10:15:03 crc kubenswrapper[4996]: I0228 10:15:03.909244 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee0a5b99f1f0813ac2217fb087c488afa1e424e10eed455d0a16b5a3724a7213" Feb 28 10:15:04 crc kubenswrapper[4996]: I0228 10:15:04.602394 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl"] Feb 28 10:15:04 crc kubenswrapper[4996]: I0228 10:15:04.612921 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537850-s8bdl"] Feb 28 10:15:05 crc kubenswrapper[4996]: I0228 10:15:05.053264 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6dc4fe-51a8-4244-bbbb-18a4d1184814" path="/var/lib/kubelet/pods/5b6dc4fe-51a8-4244-bbbb-18a4d1184814/volumes" Feb 28 10:15:08 crc kubenswrapper[4996]: I0228 10:15:08.033297 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:15:08 crc kubenswrapper[4996]: E0228 10:15:08.033942 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:15:22 crc kubenswrapper[4996]: I0228 10:15:22.034581 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:15:22 crc kubenswrapper[4996]: E0228 10:15:22.037142 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:15:36 crc kubenswrapper[4996]: I0228 10:15:36.034351 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:15:36 crc kubenswrapper[4996]: E0228 10:15:36.035045 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:15:47 crc kubenswrapper[4996]: I0228 10:15:47.762101 4996 scope.go:117] "RemoveContainer" containerID="07e9ee6800423ae08488948360e9dc26b880e3482f4a81c5d0dcedcdb17c9e6e" Feb 28 10:15:49 crc kubenswrapper[4996]: I0228 10:15:49.060423 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:15:49 crc kubenswrapper[4996]: E0228 10:15:49.060873 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:15:56 crc kubenswrapper[4996]: I0228 10:15:56.913229 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-7kw64" 
podUID="e645411e-43c5-44dd-b06a-4340e026ef8f" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.139901 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537896-vwg9k"] Feb 28 10:16:00 crc kubenswrapper[4996]: E0228 10:16:00.140743 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bef7e16-096a-4e6e-98dc-77fea33afff9" containerName="collect-profiles" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.140756 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bef7e16-096a-4e6e-98dc-77fea33afff9" containerName="collect-profiles" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.140929 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bef7e16-096a-4e6e-98dc-77fea33afff9" containerName="collect-profiles" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.141610 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537896-vwg9k" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.143189 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.143376 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.144629 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.149524 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537896-vwg9k"] Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.286035 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwkbr\" (UniqueName: \"kubernetes.io/projected/8b18f529-7cd0-4f0d-950a-b5056f2ca6ef-kube-api-access-rwkbr\") pod \"auto-csr-approver-29537896-vwg9k\" (UID: \"8b18f529-7cd0-4f0d-950a-b5056f2ca6ef\") " pod="openshift-infra/auto-csr-approver-29537896-vwg9k" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.387536 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwkbr\" (UniqueName: \"kubernetes.io/projected/8b18f529-7cd0-4f0d-950a-b5056f2ca6ef-kube-api-access-rwkbr\") pod \"auto-csr-approver-29537896-vwg9k\" (UID: \"8b18f529-7cd0-4f0d-950a-b5056f2ca6ef\") " pod="openshift-infra/auto-csr-approver-29537896-vwg9k" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.406887 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwkbr\" (UniqueName: \"kubernetes.io/projected/8b18f529-7cd0-4f0d-950a-b5056f2ca6ef-kube-api-access-rwkbr\") pod \"auto-csr-approver-29537896-vwg9k\" (UID: \"8b18f529-7cd0-4f0d-950a-b5056f2ca6ef\") " 
pod="openshift-infra/auto-csr-approver-29537896-vwg9k" Feb 28 10:16:00 crc kubenswrapper[4996]: I0228 10:16:00.459420 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537896-vwg9k" Feb 28 10:16:01 crc kubenswrapper[4996]: I0228 10:16:01.062820 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537896-vwg9k"] Feb 28 10:16:01 crc kubenswrapper[4996]: I0228 10:16:01.466145 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537896-vwg9k" event={"ID":"8b18f529-7cd0-4f0d-950a-b5056f2ca6ef","Type":"ContainerStarted","Data":"045b84b5f09eb9c7eaf098ceb39b5c6bfbee967693b4362b67d9ac841de9fd6d"} Feb 28 10:16:02 crc kubenswrapper[4996]: I0228 10:16:02.477401 4996 generic.go:334] "Generic (PLEG): container finished" podID="8b18f529-7cd0-4f0d-950a-b5056f2ca6ef" containerID="e7150f61e99a1b2105ee2db0c084d3b2226b0de2bfafd2b53d3ec90c0ebb0af3" exitCode=0 Feb 28 10:16:02 crc kubenswrapper[4996]: I0228 10:16:02.477453 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537896-vwg9k" event={"ID":"8b18f529-7cd0-4f0d-950a-b5056f2ca6ef","Type":"ContainerDied","Data":"e7150f61e99a1b2105ee2db0c084d3b2226b0de2bfafd2b53d3ec90c0ebb0af3"} Feb 28 10:16:03 crc kubenswrapper[4996]: I0228 10:16:03.033657 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:16:03 crc kubenswrapper[4996]: E0228 10:16:03.034574 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" 
Feb 28 10:16:03 crc kubenswrapper[4996]: I0228 10:16:03.974922 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537896-vwg9k" Feb 28 10:16:04 crc kubenswrapper[4996]: I0228 10:16:04.059944 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwkbr\" (UniqueName: \"kubernetes.io/projected/8b18f529-7cd0-4f0d-950a-b5056f2ca6ef-kube-api-access-rwkbr\") pod \"8b18f529-7cd0-4f0d-950a-b5056f2ca6ef\" (UID: \"8b18f529-7cd0-4f0d-950a-b5056f2ca6ef\") " Feb 28 10:16:04 crc kubenswrapper[4996]: I0228 10:16:04.066683 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b18f529-7cd0-4f0d-950a-b5056f2ca6ef-kube-api-access-rwkbr" (OuterVolumeSpecName: "kube-api-access-rwkbr") pod "8b18f529-7cd0-4f0d-950a-b5056f2ca6ef" (UID: "8b18f529-7cd0-4f0d-950a-b5056f2ca6ef"). InnerVolumeSpecName "kube-api-access-rwkbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:16:04 crc kubenswrapper[4996]: I0228 10:16:04.164154 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwkbr\" (UniqueName: \"kubernetes.io/projected/8b18f529-7cd0-4f0d-950a-b5056f2ca6ef-kube-api-access-rwkbr\") on node \"crc\" DevicePath \"\"" Feb 28 10:16:04 crc kubenswrapper[4996]: I0228 10:16:04.496234 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537896-vwg9k" event={"ID":"8b18f529-7cd0-4f0d-950a-b5056f2ca6ef","Type":"ContainerDied","Data":"045b84b5f09eb9c7eaf098ceb39b5c6bfbee967693b4362b67d9ac841de9fd6d"} Feb 28 10:16:04 crc kubenswrapper[4996]: I0228 10:16:04.496543 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="045b84b5f09eb9c7eaf098ceb39b5c6bfbee967693b4362b67d9ac841de9fd6d" Feb 28 10:16:04 crc kubenswrapper[4996]: I0228 10:16:04.496307 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537896-vwg9k" Feb 28 10:16:05 crc kubenswrapper[4996]: I0228 10:16:05.105139 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537890-k688m"] Feb 28 10:16:05 crc kubenswrapper[4996]: I0228 10:16:05.114994 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537890-k688m"] Feb 28 10:16:07 crc kubenswrapper[4996]: I0228 10:16:07.042826 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7eecd3-0cb7-4b09-8014-414531ec452c" path="/var/lib/kubelet/pods/9a7eecd3-0cb7-4b09-8014-414531ec452c/volumes" Feb 28 10:16:16 crc kubenswrapper[4996]: I0228 10:16:16.033210 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:16:16 crc kubenswrapper[4996]: E0228 10:16:16.034099 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:16:29 crc kubenswrapper[4996]: I0228 10:16:29.034186 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:16:29 crc kubenswrapper[4996]: E0228 10:16:29.035621 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:16:43 crc kubenswrapper[4996]: I0228 10:16:43.043234 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:16:43 crc kubenswrapper[4996]: E0228 10:16:43.044858 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:16:47 crc kubenswrapper[4996]: I0228 10:16:47.816696 4996 scope.go:117] "RemoveContainer" containerID="9deac47e7dedca0ac9be531482901965a34a83ee74e220381eba97c801111312" Feb 28 10:16:57 crc kubenswrapper[4996]: I0228 10:16:57.038805 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:16:57 crc kubenswrapper[4996]: E0228 10:16:57.039830 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:17:10 crc kubenswrapper[4996]: I0228 10:17:10.033970 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:17:10 crc kubenswrapper[4996]: E0228 10:17:10.034934 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:17:25 crc kubenswrapper[4996]: I0228 10:17:25.035540 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:17:25 crc kubenswrapper[4996]: E0228 10:17:25.036908 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:17:36 crc kubenswrapper[4996]: I0228 10:17:36.032904 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:17:36 crc kubenswrapper[4996]: E0228 10:17:36.033730 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:17:49 crc kubenswrapper[4996]: I0228 10:17:49.033414 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:17:49 crc kubenswrapper[4996]: E0228 10:17:49.034328 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.205816 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ntkt7"] Feb 28 10:17:58 crc kubenswrapper[4996]: E0228 10:17:58.206892 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b18f529-7cd0-4f0d-950a-b5056f2ca6ef" containerName="oc" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.206912 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b18f529-7cd0-4f0d-950a-b5056f2ca6ef" containerName="oc" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.207380 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b18f529-7cd0-4f0d-950a-b5056f2ca6ef" containerName="oc" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.208886 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.227919 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntkt7"] Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.247476 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-utilities\") pod \"certified-operators-ntkt7\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.247547 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjvxt\" (UniqueName: \"kubernetes.io/projected/b2203af0-c8ad-49e6-b800-cbae8705a47d-kube-api-access-hjvxt\") pod \"certified-operators-ntkt7\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.247645 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-catalog-content\") pod \"certified-operators-ntkt7\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.349039 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-utilities\") pod \"certified-operators-ntkt7\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.349103 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hjvxt\" (UniqueName: \"kubernetes.io/projected/b2203af0-c8ad-49e6-b800-cbae8705a47d-kube-api-access-hjvxt\") pod \"certified-operators-ntkt7\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.349138 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-catalog-content\") pod \"certified-operators-ntkt7\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.349545 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-utilities\") pod \"certified-operators-ntkt7\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.349577 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-catalog-content\") pod \"certified-operators-ntkt7\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.368276 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjvxt\" (UniqueName: \"kubernetes.io/projected/b2203af0-c8ad-49e6-b800-cbae8705a47d-kube-api-access-hjvxt\") pod \"certified-operators-ntkt7\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:58 crc kubenswrapper[4996]: I0228 10:17:58.539630 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:17:59 crc kubenswrapper[4996]: I0228 10:17:59.107804 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ntkt7"] Feb 28 10:17:59 crc kubenswrapper[4996]: W0228 10:17:59.113908 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2203af0_c8ad_49e6_b800_cbae8705a47d.slice/crio-bd33b067a53ca2151e2247af91af7b8952e07f061bdd3672ab2124a1d45a1d8b WatchSource:0}: Error finding container bd33b067a53ca2151e2247af91af7b8952e07f061bdd3672ab2124a1d45a1d8b: Status 404 returned error can't find the container with id bd33b067a53ca2151e2247af91af7b8952e07f061bdd3672ab2124a1d45a1d8b Feb 28 10:17:59 crc kubenswrapper[4996]: I0228 10:17:59.577126 4996 generic.go:334] "Generic (PLEG): container finished" podID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerID="e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185" exitCode=0 Feb 28 10:17:59 crc kubenswrapper[4996]: I0228 10:17:59.577242 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntkt7" event={"ID":"b2203af0-c8ad-49e6-b800-cbae8705a47d","Type":"ContainerDied","Data":"e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185"} Feb 28 10:17:59 crc kubenswrapper[4996]: I0228 10:17:59.577669 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntkt7" event={"ID":"b2203af0-c8ad-49e6-b800-cbae8705a47d","Type":"ContainerStarted","Data":"bd33b067a53ca2151e2247af91af7b8952e07f061bdd3672ab2124a1d45a1d8b"} Feb 28 10:17:59 crc kubenswrapper[4996]: I0228 10:17:59.579687 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.144807 4996 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29537898-4rsff"] Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.154025 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537898-4rsff" Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.156608 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.156672 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.157942 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537898-4rsff"] Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.158866 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.198685 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tf4c\" (UniqueName: \"kubernetes.io/projected/35c95b2b-9629-4c2b-9099-46db63474148-kube-api-access-6tf4c\") pod \"auto-csr-approver-29537898-4rsff\" (UID: \"35c95b2b-9629-4c2b-9099-46db63474148\") " pod="openshift-infra/auto-csr-approver-29537898-4rsff" Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.301694 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tf4c\" (UniqueName: \"kubernetes.io/projected/35c95b2b-9629-4c2b-9099-46db63474148-kube-api-access-6tf4c\") pod \"auto-csr-approver-29537898-4rsff\" (UID: \"35c95b2b-9629-4c2b-9099-46db63474148\") " pod="openshift-infra/auto-csr-approver-29537898-4rsff" Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.340924 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tf4c\" (UniqueName: 
\"kubernetes.io/projected/35c95b2b-9629-4c2b-9099-46db63474148-kube-api-access-6tf4c\") pod \"auto-csr-approver-29537898-4rsff\" (UID: \"35c95b2b-9629-4c2b-9099-46db63474148\") " pod="openshift-infra/auto-csr-approver-29537898-4rsff" Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.512052 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537898-4rsff" Feb 28 10:18:00 crc kubenswrapper[4996]: I0228 10:18:00.589189 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntkt7" event={"ID":"b2203af0-c8ad-49e6-b800-cbae8705a47d","Type":"ContainerStarted","Data":"0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8"} Feb 28 10:18:01 crc kubenswrapper[4996]: I0228 10:18:01.012547 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537898-4rsff"] Feb 28 10:18:01 crc kubenswrapper[4996]: I0228 10:18:01.033109 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:18:01 crc kubenswrapper[4996]: E0228 10:18:01.033381 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:18:01 crc kubenswrapper[4996]: I0228 10:18:01.602149 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537898-4rsff" event={"ID":"35c95b2b-9629-4c2b-9099-46db63474148","Type":"ContainerStarted","Data":"1907226f9025cb830c7e028abbeb29781c11a041295453603c4ad70fa5d8fa57"} Feb 28 10:18:02 crc kubenswrapper[4996]: I0228 10:18:02.612827 4996 
generic.go:334] "Generic (PLEG): container finished" podID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerID="0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8" exitCode=0 Feb 28 10:18:02 crc kubenswrapper[4996]: I0228 10:18:02.612880 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntkt7" event={"ID":"b2203af0-c8ad-49e6-b800-cbae8705a47d","Type":"ContainerDied","Data":"0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8"} Feb 28 10:18:02 crc kubenswrapper[4996]: I0228 10:18:02.614752 4996 generic.go:334] "Generic (PLEG): container finished" podID="35c95b2b-9629-4c2b-9099-46db63474148" containerID="90bf40b04206192db70461ee0ae373960dcfcb9945c5aca6c56fe74715f15cf5" exitCode=0 Feb 28 10:18:02 crc kubenswrapper[4996]: I0228 10:18:02.614782 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537898-4rsff" event={"ID":"35c95b2b-9629-4c2b-9099-46db63474148","Type":"ContainerDied","Data":"90bf40b04206192db70461ee0ae373960dcfcb9945c5aca6c56fe74715f15cf5"} Feb 28 10:18:03 crc kubenswrapper[4996]: I0228 10:18:03.627220 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntkt7" event={"ID":"b2203af0-c8ad-49e6-b800-cbae8705a47d","Type":"ContainerStarted","Data":"9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf"} Feb 28 10:18:03 crc kubenswrapper[4996]: I0228 10:18:03.648211 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ntkt7" podStartSLOduration=2.22042939 podStartE2EDuration="5.648190028s" podCreationTimestamp="2026-02-28 10:17:58 +0000 UTC" firstStartedPulling="2026-02-28 10:17:59.579430634 +0000 UTC m=+4643.270233445" lastFinishedPulling="2026-02-28 10:18:03.007191262 +0000 UTC m=+4646.697994083" observedRunningTime="2026-02-28 10:18:03.645085392 +0000 UTC m=+4647.335888213" watchObservedRunningTime="2026-02-28 
10:18:03.648190028 +0000 UTC m=+4647.338992839" Feb 28 10:18:04 crc kubenswrapper[4996]: I0228 10:18:04.144507 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537898-4rsff" Feb 28 10:18:04 crc kubenswrapper[4996]: I0228 10:18:04.198766 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tf4c\" (UniqueName: \"kubernetes.io/projected/35c95b2b-9629-4c2b-9099-46db63474148-kube-api-access-6tf4c\") pod \"35c95b2b-9629-4c2b-9099-46db63474148\" (UID: \"35c95b2b-9629-4c2b-9099-46db63474148\") " Feb 28 10:18:04 crc kubenswrapper[4996]: I0228 10:18:04.207335 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c95b2b-9629-4c2b-9099-46db63474148-kube-api-access-6tf4c" (OuterVolumeSpecName: "kube-api-access-6tf4c") pod "35c95b2b-9629-4c2b-9099-46db63474148" (UID: "35c95b2b-9629-4c2b-9099-46db63474148"). InnerVolumeSpecName "kube-api-access-6tf4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:18:04 crc kubenswrapper[4996]: I0228 10:18:04.301356 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tf4c\" (UniqueName: \"kubernetes.io/projected/35c95b2b-9629-4c2b-9099-46db63474148-kube-api-access-6tf4c\") on node \"crc\" DevicePath \"\"" Feb 28 10:18:04 crc kubenswrapper[4996]: I0228 10:18:04.641608 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537898-4rsff" event={"ID":"35c95b2b-9629-4c2b-9099-46db63474148","Type":"ContainerDied","Data":"1907226f9025cb830c7e028abbeb29781c11a041295453603c4ad70fa5d8fa57"} Feb 28 10:18:04 crc kubenswrapper[4996]: I0228 10:18:04.641650 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1907226f9025cb830c7e028abbeb29781c11a041295453603c4ad70fa5d8fa57" Feb 28 10:18:04 crc kubenswrapper[4996]: I0228 10:18:04.641671 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537898-4rsff" Feb 28 10:18:05 crc kubenswrapper[4996]: I0228 10:18:05.219751 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537892-bsgpw"] Feb 28 10:18:05 crc kubenswrapper[4996]: I0228 10:18:05.232802 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537892-bsgpw"] Feb 28 10:18:07 crc kubenswrapper[4996]: I0228 10:18:07.045196 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406af3bb-77d6-42ed-a8fc-4cb0752edb50" path="/var/lib/kubelet/pods/406af3bb-77d6-42ed-a8fc-4cb0752edb50/volumes" Feb 28 10:18:08 crc kubenswrapper[4996]: I0228 10:18:08.540425 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:18:08 crc kubenswrapper[4996]: I0228 10:18:08.540803 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:18:08 crc kubenswrapper[4996]: I0228 10:18:08.641965 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:18:08 crc kubenswrapper[4996]: I0228 10:18:08.726968 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:18:08 crc kubenswrapper[4996]: I0228 10:18:08.913497 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntkt7"] Feb 28 10:18:10 crc kubenswrapper[4996]: I0228 10:18:10.688351 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ntkt7" podUID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerName="registry-server" containerID="cri-o://9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf" gracePeriod=2 Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.321508 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.457596 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjvxt\" (UniqueName: \"kubernetes.io/projected/b2203af0-c8ad-49e6-b800-cbae8705a47d-kube-api-access-hjvxt\") pod \"b2203af0-c8ad-49e6-b800-cbae8705a47d\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.457673 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-catalog-content\") pod \"b2203af0-c8ad-49e6-b800-cbae8705a47d\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.457744 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-utilities\") pod \"b2203af0-c8ad-49e6-b800-cbae8705a47d\" (UID: \"b2203af0-c8ad-49e6-b800-cbae8705a47d\") " Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.459277 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-utilities" (OuterVolumeSpecName: "utilities") pod "b2203af0-c8ad-49e6-b800-cbae8705a47d" (UID: "b2203af0-c8ad-49e6-b800-cbae8705a47d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.463653 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2203af0-c8ad-49e6-b800-cbae8705a47d-kube-api-access-hjvxt" (OuterVolumeSpecName: "kube-api-access-hjvxt") pod "b2203af0-c8ad-49e6-b800-cbae8705a47d" (UID: "b2203af0-c8ad-49e6-b800-cbae8705a47d"). InnerVolumeSpecName "kube-api-access-hjvxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.525914 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2203af0-c8ad-49e6-b800-cbae8705a47d" (UID: "b2203af0-c8ad-49e6-b800-cbae8705a47d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.560480 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjvxt\" (UniqueName: \"kubernetes.io/projected/b2203af0-c8ad-49e6-b800-cbae8705a47d-kube-api-access-hjvxt\") on node \"crc\" DevicePath \"\"" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.560528 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.560542 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2203af0-c8ad-49e6-b800-cbae8705a47d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.702098 4996 generic.go:334] "Generic (PLEG): container finished" podID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerID="9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf" exitCode=0 Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.702149 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ntkt7" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.702147 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntkt7" event={"ID":"b2203af0-c8ad-49e6-b800-cbae8705a47d","Type":"ContainerDied","Data":"9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf"} Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.702361 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ntkt7" event={"ID":"b2203af0-c8ad-49e6-b800-cbae8705a47d","Type":"ContainerDied","Data":"bd33b067a53ca2151e2247af91af7b8952e07f061bdd3672ab2124a1d45a1d8b"} Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.702422 4996 scope.go:117] "RemoveContainer" containerID="9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.734621 4996 scope.go:117] "RemoveContainer" containerID="0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.740223 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ntkt7"] Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.752271 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ntkt7"] Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.763900 4996 scope.go:117] "RemoveContainer" containerID="e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.817923 4996 scope.go:117] "RemoveContainer" containerID="9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf" Feb 28 10:18:11 crc kubenswrapper[4996]: E0228 10:18:11.818599 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf\": container with ID starting with 9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf not found: ID does not exist" containerID="9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.818634 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf"} err="failed to get container status \"9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf\": rpc error: code = NotFound desc = could not find container \"9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf\": container with ID starting with 9794c858e19da77d84bca93ea3d6984e9f2451560e6f0dce8fbca8f1295bcdbf not found: ID does not exist" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.818662 4996 scope.go:117] "RemoveContainer" containerID="0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8" Feb 28 10:18:11 crc kubenswrapper[4996]: E0228 10:18:11.819608 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8\": container with ID starting with 0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8 not found: ID does not exist" containerID="0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.819649 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8"} err="failed to get container status \"0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8\": rpc error: code = NotFound desc = could not find container \"0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8\": container with ID 
starting with 0560818a6c4332e06263a8cdd06a3f08131057966be8af7be3cf9023423d56d8 not found: ID does not exist" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.819677 4996 scope.go:117] "RemoveContainer" containerID="e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185" Feb 28 10:18:11 crc kubenswrapper[4996]: E0228 10:18:11.820316 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185\": container with ID starting with e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185 not found: ID does not exist" containerID="e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185" Feb 28 10:18:11 crc kubenswrapper[4996]: I0228 10:18:11.820353 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185"} err="failed to get container status \"e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185\": rpc error: code = NotFound desc = could not find container \"e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185\": container with ID starting with e0ed885cb2a6a498c83ca0361940e0e4db7dc17d0d4c2083ec1b5c850815b185 not found: ID does not exist" Feb 28 10:18:13 crc kubenswrapper[4996]: I0228 10:18:13.042200 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2203af0-c8ad-49e6-b800-cbae8705a47d" path="/var/lib/kubelet/pods/b2203af0-c8ad-49e6-b800-cbae8705a47d/volumes" Feb 28 10:18:14 crc kubenswrapper[4996]: I0228 10:18:14.032912 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:18:14 crc kubenswrapper[4996]: E0228 10:18:14.033295 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:18:25 crc kubenswrapper[4996]: I0228 10:18:25.033353 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:18:25 crc kubenswrapper[4996]: E0228 10:18:25.034328 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:18:37 crc kubenswrapper[4996]: I0228 10:18:37.044065 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:18:37 crc kubenswrapper[4996]: E0228 10:18:37.049164 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:18:47 crc kubenswrapper[4996]: I0228 10:18:47.897584 4996 scope.go:117] "RemoveContainer" containerID="1fce6cf7c3fac80b6faf8f0563c81a4388d4a0893648a5f18ec8d4d4fbc61537" Feb 28 10:18:50 crc kubenswrapper[4996]: I0228 10:18:50.033820 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:18:50 crc 
kubenswrapper[4996]: E0228 10:18:50.034649 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:19:04 crc kubenswrapper[4996]: I0228 10:19:04.034924 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:19:04 crc kubenswrapper[4996]: E0228 10:19:04.035831 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:19:16 crc kubenswrapper[4996]: I0228 10:19:16.033692 4996 scope.go:117] "RemoveContainer" containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:19:17 crc kubenswrapper[4996]: I0228 10:19:17.263646 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"b1696a9863350185339499531bf245025975b43b7e8a4d15687b35e82b19e0ea"} Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.087995 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gt5xv"] Feb 28 10:19:52 crc kubenswrapper[4996]: E0228 10:19:52.088875 4996 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="35c95b2b-9629-4c2b-9099-46db63474148" containerName="oc" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.088891 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c95b2b-9629-4c2b-9099-46db63474148" containerName="oc" Feb 28 10:19:52 crc kubenswrapper[4996]: E0228 10:19:52.088909 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerName="extract-content" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.088918 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerName="extract-content" Feb 28 10:19:52 crc kubenswrapper[4996]: E0228 10:19:52.088937 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerName="registry-server" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.088947 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerName="registry-server" Feb 28 10:19:52 crc kubenswrapper[4996]: E0228 10:19:52.089399 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerName="extract-utilities" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.089414 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerName="extract-utilities" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.089679 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c95b2b-9629-4c2b-9099-46db63474148" containerName="oc" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.089694 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2203af0-c8ad-49e6-b800-cbae8705a47d" containerName="registry-server" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.091325 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.105085 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gt5xv"] Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.183451 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-catalog-content\") pod \"redhat-marketplace-gt5xv\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.183642 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-utilities\") pod \"redhat-marketplace-gt5xv\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.183685 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgvfs\" (UniqueName: \"kubernetes.io/projected/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-kube-api-access-rgvfs\") pod \"redhat-marketplace-gt5xv\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.285358 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgvfs\" (UniqueName: \"kubernetes.io/projected/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-kube-api-access-rgvfs\") pod \"redhat-marketplace-gt5xv\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.285487 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-catalog-content\") pod \"redhat-marketplace-gt5xv\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.285575 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-utilities\") pod \"redhat-marketplace-gt5xv\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.286128 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-utilities\") pod \"redhat-marketplace-gt5xv\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.286193 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-catalog-content\") pod \"redhat-marketplace-gt5xv\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.314918 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgvfs\" (UniqueName: \"kubernetes.io/projected/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-kube-api-access-rgvfs\") pod \"redhat-marketplace-gt5xv\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.413373 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:19:52 crc kubenswrapper[4996]: I0228 10:19:52.879897 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gt5xv"] Feb 28 10:19:52 crc kubenswrapper[4996]: W0228 10:19:52.882734 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87b7c3bb_f6cc_4459_9015_c1ef5c695b9d.slice/crio-7c5b920d7257318c3852f44c67b92a1bcea5be503f857bb0b70513a4c5870094 WatchSource:0}: Error finding container 7c5b920d7257318c3852f44c67b92a1bcea5be503f857bb0b70513a4c5870094: Status 404 returned error can't find the container with id 7c5b920d7257318c3852f44c67b92a1bcea5be503f857bb0b70513a4c5870094 Feb 28 10:19:53 crc kubenswrapper[4996]: I0228 10:19:53.588157 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gt5xv" event={"ID":"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d","Type":"ContainerStarted","Data":"7c5b920d7257318c3852f44c67b92a1bcea5be503f857bb0b70513a4c5870094"} Feb 28 10:19:54 crc kubenswrapper[4996]: I0228 10:19:54.600142 4996 generic.go:334] "Generic (PLEG): container finished" podID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerID="4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9" exitCode=0 Feb 28 10:19:54 crc kubenswrapper[4996]: I0228 10:19:54.600228 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gt5xv" event={"ID":"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d","Type":"ContainerDied","Data":"4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9"} Feb 28 10:19:56 crc kubenswrapper[4996]: I0228 10:19:56.618882 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gt5xv" 
event={"ID":"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d","Type":"ContainerStarted","Data":"08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09"} Feb 28 10:19:57 crc kubenswrapper[4996]: I0228 10:19:57.629703 4996 generic.go:334] "Generic (PLEG): container finished" podID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerID="08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09" exitCode=0 Feb 28 10:19:57 crc kubenswrapper[4996]: I0228 10:19:57.629756 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gt5xv" event={"ID":"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d","Type":"ContainerDied","Data":"08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09"} Feb 28 10:19:58 crc kubenswrapper[4996]: I0228 10:19:58.639541 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gt5xv" event={"ID":"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d","Type":"ContainerStarted","Data":"c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba"} Feb 28 10:19:58 crc kubenswrapper[4996]: I0228 10:19:58.665075 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gt5xv" podStartSLOduration=3.24547434 podStartE2EDuration="6.665051227s" podCreationTimestamp="2026-02-28 10:19:52 +0000 UTC" firstStartedPulling="2026-02-28 10:19:54.602475884 +0000 UTC m=+4758.293278705" lastFinishedPulling="2026-02-28 10:19:58.022052781 +0000 UTC m=+4761.712855592" observedRunningTime="2026-02-28 10:19:58.655096722 +0000 UTC m=+4762.345899543" watchObservedRunningTime="2026-02-28 10:19:58.665051227 +0000 UTC m=+4762.355854038" Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.146484 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537900-vr7rw"] Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.148368 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537900-vr7rw" Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.150617 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.151084 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.151091 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.157033 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537900-vr7rw"] Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.255775 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbp6p\" (UniqueName: \"kubernetes.io/projected/4670d73a-2948-4811-95ac-e3690b832e69-kube-api-access-lbp6p\") pod \"auto-csr-approver-29537900-vr7rw\" (UID: \"4670d73a-2948-4811-95ac-e3690b832e69\") " pod="openshift-infra/auto-csr-approver-29537900-vr7rw" Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.357633 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbp6p\" (UniqueName: \"kubernetes.io/projected/4670d73a-2948-4811-95ac-e3690b832e69-kube-api-access-lbp6p\") pod \"auto-csr-approver-29537900-vr7rw\" (UID: \"4670d73a-2948-4811-95ac-e3690b832e69\") " pod="openshift-infra/auto-csr-approver-29537900-vr7rw" Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.383842 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbp6p\" (UniqueName: \"kubernetes.io/projected/4670d73a-2948-4811-95ac-e3690b832e69-kube-api-access-lbp6p\") pod \"auto-csr-approver-29537900-vr7rw\" (UID: \"4670d73a-2948-4811-95ac-e3690b832e69\") " 
pod="openshift-infra/auto-csr-approver-29537900-vr7rw" Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.468516 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537900-vr7rw" Feb 28 10:20:00 crc kubenswrapper[4996]: I0228 10:20:00.917557 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537900-vr7rw"] Feb 28 10:20:00 crc kubenswrapper[4996]: W0228 10:20:00.919217 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4670d73a_2948_4811_95ac_e3690b832e69.slice/crio-85efc5e3371c2fecc2c0ac54aab71c52a295793c27ea4f59263da777b2b7eeba WatchSource:0}: Error finding container 85efc5e3371c2fecc2c0ac54aab71c52a295793c27ea4f59263da777b2b7eeba: Status 404 returned error can't find the container with id 85efc5e3371c2fecc2c0ac54aab71c52a295793c27ea4f59263da777b2b7eeba Feb 28 10:20:01 crc kubenswrapper[4996]: I0228 10:20:01.688298 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537900-vr7rw" event={"ID":"4670d73a-2948-4811-95ac-e3690b832e69","Type":"ContainerStarted","Data":"85efc5e3371c2fecc2c0ac54aab71c52a295793c27ea4f59263da777b2b7eeba"} Feb 28 10:20:02 crc kubenswrapper[4996]: I0228 10:20:02.414447 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:20:02 crc kubenswrapper[4996]: I0228 10:20:02.414752 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:20:02 crc kubenswrapper[4996]: I0228 10:20:02.489587 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:20:02 crc kubenswrapper[4996]: I0228 10:20:02.701687 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="4670d73a-2948-4811-95ac-e3690b832e69" containerID="7cdd7f4e983ee7efa15d74aa99c95e4cd378df3d2ed16740dc347acc7f3f4f03" exitCode=0 Feb 28 10:20:02 crc kubenswrapper[4996]: I0228 10:20:02.701768 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537900-vr7rw" event={"ID":"4670d73a-2948-4811-95ac-e3690b832e69","Type":"ContainerDied","Data":"7cdd7f4e983ee7efa15d74aa99c95e4cd378df3d2ed16740dc347acc7f3f4f03"} Feb 28 10:20:04 crc kubenswrapper[4996]: I0228 10:20:04.388069 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537900-vr7rw" Feb 28 10:20:04 crc kubenswrapper[4996]: I0228 10:20:04.444055 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbp6p\" (UniqueName: \"kubernetes.io/projected/4670d73a-2948-4811-95ac-e3690b832e69-kube-api-access-lbp6p\") pod \"4670d73a-2948-4811-95ac-e3690b832e69\" (UID: \"4670d73a-2948-4811-95ac-e3690b832e69\") " Feb 28 10:20:04 crc kubenswrapper[4996]: I0228 10:20:04.456656 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4670d73a-2948-4811-95ac-e3690b832e69-kube-api-access-lbp6p" (OuterVolumeSpecName: "kube-api-access-lbp6p") pod "4670d73a-2948-4811-95ac-e3690b832e69" (UID: "4670d73a-2948-4811-95ac-e3690b832e69"). InnerVolumeSpecName "kube-api-access-lbp6p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:20:04 crc kubenswrapper[4996]: I0228 10:20:04.546184 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbp6p\" (UniqueName: \"kubernetes.io/projected/4670d73a-2948-4811-95ac-e3690b832e69-kube-api-access-lbp6p\") on node \"crc\" DevicePath \"\"" Feb 28 10:20:04 crc kubenswrapper[4996]: I0228 10:20:04.732509 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537900-vr7rw" event={"ID":"4670d73a-2948-4811-95ac-e3690b832e69","Type":"ContainerDied","Data":"85efc5e3371c2fecc2c0ac54aab71c52a295793c27ea4f59263da777b2b7eeba"} Feb 28 10:20:04 crc kubenswrapper[4996]: I0228 10:20:04.732550 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85efc5e3371c2fecc2c0ac54aab71c52a295793c27ea4f59263da777b2b7eeba" Feb 28 10:20:04 crc kubenswrapper[4996]: I0228 10:20:04.732560 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537900-vr7rw" Feb 28 10:20:05 crc kubenswrapper[4996]: I0228 10:20:05.457289 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537894-6lwxt"] Feb 28 10:20:05 crc kubenswrapper[4996]: I0228 10:20:05.467853 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537894-6lwxt"] Feb 28 10:20:07 crc kubenswrapper[4996]: I0228 10:20:07.042556 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ad8099-228f-41a1-9605-2bd8307cf75c" path="/var/lib/kubelet/pods/46ad8099-228f-41a1-9605-2bd8307cf75c/volumes" Feb 28 10:20:12 crc kubenswrapper[4996]: I0228 10:20:12.478705 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:20:12 crc kubenswrapper[4996]: I0228 10:20:12.535823 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-gt5xv"] Feb 28 10:20:12 crc kubenswrapper[4996]: I0228 10:20:12.799617 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gt5xv" podUID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerName="registry-server" containerID="cri-o://c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba" gracePeriod=2 Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.437700 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.531130 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-utilities\") pod \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.531307 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-catalog-content\") pod \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.531482 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgvfs\" (UniqueName: \"kubernetes.io/projected/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-kube-api-access-rgvfs\") pod \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\" (UID: \"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d\") " Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.533901 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-utilities" (OuterVolumeSpecName: "utilities") pod "87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" (UID: 
"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.539360 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-kube-api-access-rgvfs" (OuterVolumeSpecName: "kube-api-access-rgvfs") pod "87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" (UID: "87b7c3bb-f6cc-4459-9015-c1ef5c695b9d"). InnerVolumeSpecName "kube-api-access-rgvfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.558165 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" (UID: "87b7c3bb-f6cc-4459-9015-c1ef5c695b9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.634082 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgvfs\" (UniqueName: \"kubernetes.io/projected/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-kube-api-access-rgvfs\") on node \"crc\" DevicePath \"\"" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.634304 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.634364 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.809341 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerID="c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba" exitCode=0 Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.809378 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gt5xv" event={"ID":"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d","Type":"ContainerDied","Data":"c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba"} Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.809437 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gt5xv" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.809658 4996 scope.go:117] "RemoveContainer" containerID="c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.809644 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gt5xv" event={"ID":"87b7c3bb-f6cc-4459-9015-c1ef5c695b9d","Type":"ContainerDied","Data":"7c5b920d7257318c3852f44c67b92a1bcea5be503f857bb0b70513a4c5870094"} Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.835668 4996 scope.go:117] "RemoveContainer" containerID="08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.858272 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gt5xv"] Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.861666 4996 scope.go:117] "RemoveContainer" containerID="4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.866506 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gt5xv"] Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.907379 4996 scope.go:117] "RemoveContainer" 
containerID="c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba" Feb 28 10:20:13 crc kubenswrapper[4996]: E0228 10:20:13.908140 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba\": container with ID starting with c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba not found: ID does not exist" containerID="c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.908172 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba"} err="failed to get container status \"c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba\": rpc error: code = NotFound desc = could not find container \"c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba\": container with ID starting with c21ae1946172e8224910e07f8cbf2e95e5bd9e2f4c9646de7266ac66527d42ba not found: ID does not exist" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.908199 4996 scope.go:117] "RemoveContainer" containerID="08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09" Feb 28 10:20:13 crc kubenswrapper[4996]: E0228 10:20:13.908395 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09\": container with ID starting with 08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09 not found: ID does not exist" containerID="08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.908427 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09"} err="failed to get container status \"08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09\": rpc error: code = NotFound desc = could not find container \"08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09\": container with ID starting with 08e75f5f6ec1ceee9391ec1c6fac6fa691348a0f78b567c3ff115f98a8dc1d09 not found: ID does not exist" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.908446 4996 scope.go:117] "RemoveContainer" containerID="4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9" Feb 28 10:20:13 crc kubenswrapper[4996]: E0228 10:20:13.908660 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9\": container with ID starting with 4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9 not found: ID does not exist" containerID="4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9" Feb 28 10:20:13 crc kubenswrapper[4996]: I0228 10:20:13.908684 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9"} err="failed to get container status \"4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9\": rpc error: code = NotFound desc = could not find container \"4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9\": container with ID starting with 4684c85e6fdfe4919a927e2458c933aecae901b7bc36da66b213c21931ee0ab9 not found: ID does not exist" Feb 28 10:20:15 crc kubenswrapper[4996]: I0228 10:20:15.045554 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" path="/var/lib/kubelet/pods/87b7c3bb-f6cc-4459-9015-c1ef5c695b9d/volumes" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 
10:20:36.289342 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-47gtb"] Feb 28 10:20:36 crc kubenswrapper[4996]: E0228 10:20:36.290353 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4670d73a-2948-4811-95ac-e3690b832e69" containerName="oc" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.290372 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4670d73a-2948-4811-95ac-e3690b832e69" containerName="oc" Feb 28 10:20:36 crc kubenswrapper[4996]: E0228 10:20:36.290413 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerName="extract-utilities" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.290423 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerName="extract-utilities" Feb 28 10:20:36 crc kubenswrapper[4996]: E0228 10:20:36.290447 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerName="extract-content" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.290456 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerName="extract-content" Feb 28 10:20:36 crc kubenswrapper[4996]: E0228 10:20:36.290475 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerName="registry-server" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.290483 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerName="registry-server" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.290716 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b7c3bb-f6cc-4459-9015-c1ef5c695b9d" containerName="registry-server" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.290740 4996 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="4670d73a-2948-4811-95ac-e3690b832e69" containerName="oc" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.292479 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.299647 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47gtb"] Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.402642 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-utilities\") pod \"redhat-operators-47gtb\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.402849 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-catalog-content\") pod \"redhat-operators-47gtb\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.402881 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4qw\" (UniqueName: \"kubernetes.io/projected/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-kube-api-access-lp4qw\") pod \"redhat-operators-47gtb\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.505075 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-utilities\") pod \"redhat-operators-47gtb\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " 
pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.505246 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-catalog-content\") pod \"redhat-operators-47gtb\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.505290 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp4qw\" (UniqueName: \"kubernetes.io/projected/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-kube-api-access-lp4qw\") pod \"redhat-operators-47gtb\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.505701 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-utilities\") pod \"redhat-operators-47gtb\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.505783 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-catalog-content\") pod \"redhat-operators-47gtb\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.598928 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp4qw\" (UniqueName: \"kubernetes.io/projected/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-kube-api-access-lp4qw\") pod \"redhat-operators-47gtb\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " pod="openshift-marketplace/redhat-operators-47gtb" Feb 
28 10:20:36 crc kubenswrapper[4996]: I0228 10:20:36.662606 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:37 crc kubenswrapper[4996]: I0228 10:20:37.132681 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-47gtb"] Feb 28 10:20:38 crc kubenswrapper[4996]: I0228 10:20:38.046346 4996 generic.go:334] "Generic (PLEG): container finished" podID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerID="b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7" exitCode=0 Feb 28 10:20:38 crc kubenswrapper[4996]: I0228 10:20:38.046857 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47gtb" event={"ID":"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41","Type":"ContainerDied","Data":"b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7"} Feb 28 10:20:38 crc kubenswrapper[4996]: I0228 10:20:38.046891 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47gtb" event={"ID":"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41","Type":"ContainerStarted","Data":"7421bfef7468d703ec8fc9a5c7de2aab10e7fcf31220c20b3444f0c514d8e7bd"} Feb 28 10:20:39 crc kubenswrapper[4996]: I0228 10:20:39.056711 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47gtb" event={"ID":"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41","Type":"ContainerStarted","Data":"12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68"} Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.104562 4996 generic.go:334] "Generic (PLEG): container finished" podID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerID="12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68" exitCode=0 Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.104622 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47gtb" 
event={"ID":"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41","Type":"ContainerDied","Data":"12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68"} Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.717066 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8gdp9"] Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.719455 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.755803 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gdp9"] Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.773402 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-catalog-content\") pod \"community-operators-8gdp9\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.773507 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4c2p\" (UniqueName: \"kubernetes.io/projected/4d7499c9-f42e-4de3-979b-fbd6006495be-kube-api-access-r4c2p\") pod \"community-operators-8gdp9\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.773543 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-utilities\") pod \"community-operators-8gdp9\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.875564 
4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-utilities\") pod \"community-operators-8gdp9\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.875739 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-catalog-content\") pod \"community-operators-8gdp9\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.875854 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4c2p\" (UniqueName: \"kubernetes.io/projected/4d7499c9-f42e-4de3-979b-fbd6006495be-kube-api-access-r4c2p\") pod \"community-operators-8gdp9\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.876790 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-utilities\") pod \"community-operators-8gdp9\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.877174 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-catalog-content\") pod \"community-operators-8gdp9\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:44 crc kubenswrapper[4996]: I0228 10:20:44.903989 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-r4c2p\" (UniqueName: \"kubernetes.io/projected/4d7499c9-f42e-4de3-979b-fbd6006495be-kube-api-access-r4c2p\") pod \"community-operators-8gdp9\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:45 crc kubenswrapper[4996]: I0228 10:20:45.051439 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:45 crc kubenswrapper[4996]: I0228 10:20:45.119589 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47gtb" event={"ID":"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41","Type":"ContainerStarted","Data":"b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff"} Feb 28 10:20:45 crc kubenswrapper[4996]: I0228 10:20:45.624131 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-47gtb" podStartSLOduration=2.866214953 podStartE2EDuration="9.624107932s" podCreationTimestamp="2026-02-28 10:20:36 +0000 UTC" firstStartedPulling="2026-02-28 10:20:38.054306785 +0000 UTC m=+4801.745109596" lastFinishedPulling="2026-02-28 10:20:44.812199764 +0000 UTC m=+4808.503002575" observedRunningTime="2026-02-28 10:20:45.147070079 +0000 UTC m=+4808.837872890" watchObservedRunningTime="2026-02-28 10:20:45.624107932 +0000 UTC m=+4809.314910753" Feb 28 10:20:45 crc kubenswrapper[4996]: W0228 10:20:45.630529 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d7499c9_f42e_4de3_979b_fbd6006495be.slice/crio-5c35a523c46f95b91d6c61cce027e7b33270377b41738b7c4be802c21e10735a WatchSource:0}: Error finding container 5c35a523c46f95b91d6c61cce027e7b33270377b41738b7c4be802c21e10735a: Status 404 returned error can't find the container with id 5c35a523c46f95b91d6c61cce027e7b33270377b41738b7c4be802c21e10735a Feb 28 
10:20:45 crc kubenswrapper[4996]: I0228 10:20:45.632450 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8gdp9"] Feb 28 10:20:46 crc kubenswrapper[4996]: I0228 10:20:46.128958 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gdp9" event={"ID":"4d7499c9-f42e-4de3-979b-fbd6006495be","Type":"ContainerStarted","Data":"5c35a523c46f95b91d6c61cce027e7b33270377b41738b7c4be802c21e10735a"} Feb 28 10:20:46 crc kubenswrapper[4996]: I0228 10:20:46.663664 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:46 crc kubenswrapper[4996]: I0228 10:20:46.664138 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:20:47 crc kubenswrapper[4996]: I0228 10:20:47.138553 4996 generic.go:334] "Generic (PLEG): container finished" podID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerID="4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e" exitCode=0 Feb 28 10:20:47 crc kubenswrapper[4996]: I0228 10:20:47.138653 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gdp9" event={"ID":"4d7499c9-f42e-4de3-979b-fbd6006495be","Type":"ContainerDied","Data":"4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e"} Feb 28 10:20:47 crc kubenswrapper[4996]: I0228 10:20:47.737318 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-47gtb" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="registry-server" probeResult="failure" output=< Feb 28 10:20:47 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:20:47 crc kubenswrapper[4996]: > Feb 28 10:20:48 crc kubenswrapper[4996]: I0228 10:20:48.031206 4996 scope.go:117] "RemoveContainer" 
containerID="6d8aa9c4b6c9bac9cdfb9b79d4465fce777e2f75b716a16a04f9ed04203f8b52" Feb 28 10:20:49 crc kubenswrapper[4996]: I0228 10:20:49.159317 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gdp9" event={"ID":"4d7499c9-f42e-4de3-979b-fbd6006495be","Type":"ContainerStarted","Data":"ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc"} Feb 28 10:20:50 crc kubenswrapper[4996]: I0228 10:20:50.168963 4996 generic.go:334] "Generic (PLEG): container finished" podID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerID="ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc" exitCode=0 Feb 28 10:20:50 crc kubenswrapper[4996]: I0228 10:20:50.169304 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gdp9" event={"ID":"4d7499c9-f42e-4de3-979b-fbd6006495be","Type":"ContainerDied","Data":"ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc"} Feb 28 10:20:51 crc kubenswrapper[4996]: I0228 10:20:51.189922 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gdp9" event={"ID":"4d7499c9-f42e-4de3-979b-fbd6006495be","Type":"ContainerStarted","Data":"e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88"} Feb 28 10:20:51 crc kubenswrapper[4996]: I0228 10:20:51.219633 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8gdp9" podStartSLOduration=3.768424784 podStartE2EDuration="7.219617305s" podCreationTimestamp="2026-02-28 10:20:44 +0000 UTC" firstStartedPulling="2026-02-28 10:20:47.140647661 +0000 UTC m=+4810.831450492" lastFinishedPulling="2026-02-28 10:20:50.591840192 +0000 UTC m=+4814.282643013" observedRunningTime="2026-02-28 10:20:51.218578809 +0000 UTC m=+4814.909381630" watchObservedRunningTime="2026-02-28 10:20:51.219617305 +0000 UTC m=+4814.910420116" Feb 28 10:20:55 crc kubenswrapper[4996]: I0228 
10:20:55.051624 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:55 crc kubenswrapper[4996]: I0228 10:20:55.052181 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:55 crc kubenswrapper[4996]: I0228 10:20:55.094980 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:55 crc kubenswrapper[4996]: I0228 10:20:55.289886 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:55 crc kubenswrapper[4996]: I0228 10:20:55.352706 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gdp9"] Feb 28 10:20:57 crc kubenswrapper[4996]: I0228 10:20:57.239670 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8gdp9" podUID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerName="registry-server" containerID="cri-o://e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88" gracePeriod=2 Feb 28 10:20:57 crc kubenswrapper[4996]: I0228 10:20:57.705828 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-47gtb" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="registry-server" probeResult="failure" output=< Feb 28 10:20:57 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:20:57 crc kubenswrapper[4996]: > Feb 28 10:20:57 crc kubenswrapper[4996]: I0228 10:20:57.867392 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:57 crc kubenswrapper[4996]: I0228 10:20:57.934545 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-catalog-content\") pod \"4d7499c9-f42e-4de3-979b-fbd6006495be\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " Feb 28 10:20:57 crc kubenswrapper[4996]: I0228 10:20:57.934715 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4c2p\" (UniqueName: \"kubernetes.io/projected/4d7499c9-f42e-4de3-979b-fbd6006495be-kube-api-access-r4c2p\") pod \"4d7499c9-f42e-4de3-979b-fbd6006495be\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " Feb 28 10:20:57 crc kubenswrapper[4996]: I0228 10:20:57.934846 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-utilities\") pod \"4d7499c9-f42e-4de3-979b-fbd6006495be\" (UID: \"4d7499c9-f42e-4de3-979b-fbd6006495be\") " Feb 28 10:20:57 crc kubenswrapper[4996]: I0228 10:20:57.936094 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-utilities" (OuterVolumeSpecName: "utilities") pod "4d7499c9-f42e-4de3-979b-fbd6006495be" (UID: "4d7499c9-f42e-4de3-979b-fbd6006495be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:20:57 crc kubenswrapper[4996]: I0228 10:20:57.949351 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7499c9-f42e-4de3-979b-fbd6006495be-kube-api-access-r4c2p" (OuterVolumeSpecName: "kube-api-access-r4c2p") pod "4d7499c9-f42e-4de3-979b-fbd6006495be" (UID: "4d7499c9-f42e-4de3-979b-fbd6006495be"). InnerVolumeSpecName "kube-api-access-r4c2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:20:57 crc kubenswrapper[4996]: I0228 10:20:57.988594 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d7499c9-f42e-4de3-979b-fbd6006495be" (UID: "4d7499c9-f42e-4de3-979b-fbd6006495be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.037693 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.037724 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4c2p\" (UniqueName: \"kubernetes.io/projected/4d7499c9-f42e-4de3-979b-fbd6006495be-kube-api-access-r4c2p\") on node \"crc\" DevicePath \"\"" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.037739 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d7499c9-f42e-4de3-979b-fbd6006495be-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.249694 4996 generic.go:334] "Generic (PLEG): container finished" podID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerID="e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88" exitCode=0 Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.249760 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gdp9" event={"ID":"4d7499c9-f42e-4de3-979b-fbd6006495be","Type":"ContainerDied","Data":"e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88"} Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.249783 4996 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-8gdp9" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.249805 4996 scope.go:117] "RemoveContainer" containerID="e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.249789 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8gdp9" event={"ID":"4d7499c9-f42e-4de3-979b-fbd6006495be","Type":"ContainerDied","Data":"5c35a523c46f95b91d6c61cce027e7b33270377b41738b7c4be802c21e10735a"} Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.283113 4996 scope.go:117] "RemoveContainer" containerID="ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.304069 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8gdp9"] Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.314356 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8gdp9"] Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.321592 4996 scope.go:117] "RemoveContainer" containerID="4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.370985 4996 scope.go:117] "RemoveContainer" containerID="e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88" Feb 28 10:20:58 crc kubenswrapper[4996]: E0228 10:20:58.371482 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88\": container with ID starting with e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88 not found: ID does not exist" containerID="e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.371529 
4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88"} err="failed to get container status \"e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88\": rpc error: code = NotFound desc = could not find container \"e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88\": container with ID starting with e95707e9c64b91d2e7a866697525d360091a0b47ee18e379ab32e5ea978a8f88 not found: ID does not exist" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.371573 4996 scope.go:117] "RemoveContainer" containerID="ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc" Feb 28 10:20:58 crc kubenswrapper[4996]: E0228 10:20:58.371968 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc\": container with ID starting with ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc not found: ID does not exist" containerID="ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.372023 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc"} err="failed to get container status \"ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc\": rpc error: code = NotFound desc = could not find container \"ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc\": container with ID starting with ad21b7f083413ca0adc738205cc8b82c329a2a1e55651e91986afce74d50c2cc not found: ID does not exist" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.372054 4996 scope.go:117] "RemoveContainer" containerID="4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e" Feb 28 10:20:58 crc kubenswrapper[4996]: E0228 
10:20:58.372329 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e\": container with ID starting with 4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e not found: ID does not exist" containerID="4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e" Feb 28 10:20:58 crc kubenswrapper[4996]: I0228 10:20:58.372365 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e"} err="failed to get container status \"4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e\": rpc error: code = NotFound desc = could not find container \"4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e\": container with ID starting with 4241dbf085600849fbd4c3e651d75e44f83cf33f642832669f4a01592ac5ba0e not found: ID does not exist" Feb 28 10:20:59 crc kubenswrapper[4996]: I0228 10:20:59.048132 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d7499c9-f42e-4de3-979b-fbd6006495be" path="/var/lib/kubelet/pods/4d7499c9-f42e-4de3-979b-fbd6006495be/volumes" Feb 28 10:21:07 crc kubenswrapper[4996]: I0228 10:21:07.715631 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-47gtb" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="registry-server" probeResult="failure" output=< Feb 28 10:21:07 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:21:07 crc kubenswrapper[4996]: > Feb 28 10:21:17 crc kubenswrapper[4996]: I0228 10:21:17.712407 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-47gtb" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="registry-server" probeResult="failure" output=< Feb 28 10:21:17 crc kubenswrapper[4996]: 
timeout: failed to connect service ":50051" within 1s Feb 28 10:21:17 crc kubenswrapper[4996]: > Feb 28 10:21:26 crc kubenswrapper[4996]: I0228 10:21:26.716575 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:21:26 crc kubenswrapper[4996]: I0228 10:21:26.768685 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:21:26 crc kubenswrapper[4996]: I0228 10:21:26.963754 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-47gtb"] Feb 28 10:21:28 crc kubenswrapper[4996]: I0228 10:21:28.563575 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-47gtb" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="registry-server" containerID="cri-o://b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff" gracePeriod=2 Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.064281 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.105963 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-catalog-content\") pod \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.106135 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp4qw\" (UniqueName: \"kubernetes.io/projected/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-kube-api-access-lp4qw\") pod \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.106176 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-utilities\") pod \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\" (UID: \"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41\") " Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.108100 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-utilities" (OuterVolumeSpecName: "utilities") pod "1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" (UID: "1c91af26-1f71-4fd0-b5ad-c0dbae75ed41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.114155 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-kube-api-access-lp4qw" (OuterVolumeSpecName: "kube-api-access-lp4qw") pod "1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" (UID: "1c91af26-1f71-4fd0-b5ad-c0dbae75ed41"). InnerVolumeSpecName "kube-api-access-lp4qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.208109 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp4qw\" (UniqueName: \"kubernetes.io/projected/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-kube-api-access-lp4qw\") on node \"crc\" DevicePath \"\"" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.208143 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.250936 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" (UID: "1c91af26-1f71-4fd0-b5ad-c0dbae75ed41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.309859 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.575040 4996 generic.go:334] "Generic (PLEG): container finished" podID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerID="b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff" exitCode=0 Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.575093 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47gtb" event={"ID":"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41","Type":"ContainerDied","Data":"b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff"} Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.575120 4996 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-47gtb" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.575135 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-47gtb" event={"ID":"1c91af26-1f71-4fd0-b5ad-c0dbae75ed41","Type":"ContainerDied","Data":"7421bfef7468d703ec8fc9a5c7de2aab10e7fcf31220c20b3444f0c514d8e7bd"} Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.575178 4996 scope.go:117] "RemoveContainer" containerID="b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.593947 4996 scope.go:117] "RemoveContainer" containerID="12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.616754 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-47gtb"] Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.624721 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-47gtb"] Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.629763 4996 scope.go:117] "RemoveContainer" containerID="b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.662117 4996 scope.go:117] "RemoveContainer" containerID="b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff" Feb 28 10:21:29 crc kubenswrapper[4996]: E0228 10:21:29.662543 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff\": container with ID starting with b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff not found: ID does not exist" containerID="b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.662575 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff"} err="failed to get container status \"b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff\": rpc error: code = NotFound desc = could not find container \"b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff\": container with ID starting with b7453dee57e5e6313dd6412946266dade372d2899f89c1be995dc811e48ec7ff not found: ID does not exist" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.662594 4996 scope.go:117] "RemoveContainer" containerID="12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68" Feb 28 10:21:29 crc kubenswrapper[4996]: E0228 10:21:29.662967 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68\": container with ID starting with 12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68 not found: ID does not exist" containerID="12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.663079 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68"} err="failed to get container status \"12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68\": rpc error: code = NotFound desc = could not find container \"12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68\": container with ID starting with 12c632812cc5429197cf8def17b59b66177659b8aac66b0b029a55f547371b68 not found: ID does not exist" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.663125 4996 scope.go:117] "RemoveContainer" containerID="b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7" Feb 28 10:21:29 crc kubenswrapper[4996]: E0228 
10:21:29.663599 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7\": container with ID starting with b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7 not found: ID does not exist" containerID="b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7" Feb 28 10:21:29 crc kubenswrapper[4996]: I0228 10:21:29.663628 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7"} err="failed to get container status \"b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7\": rpc error: code = NotFound desc = could not find container \"b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7\": container with ID starting with b8bbb1c871fa28683ed5d135313fcea1b124a7d30e0ad3dbb3314e83c28ebae7 not found: ID does not exist" Feb 28 10:21:31 crc kubenswrapper[4996]: I0228 10:21:31.047963 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" path="/var/lib/kubelet/pods/1c91af26-1f71-4fd0-b5ad-c0dbae75ed41/volumes" Feb 28 10:21:42 crc kubenswrapper[4996]: I0228 10:21:42.249391 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:21:42 crc kubenswrapper[4996]: I0228 10:21:42.250108 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.142136 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537902-vxxgz"] Feb 28 10:22:00 crc kubenswrapper[4996]: E0228 10:22:00.143488 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerName="registry-server" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.143507 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerName="registry-server" Feb 28 10:22:00 crc kubenswrapper[4996]: E0228 10:22:00.143531 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="extract-utilities" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.143539 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="extract-utilities" Feb 28 10:22:00 crc kubenswrapper[4996]: E0228 10:22:00.143560 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerName="extract-content" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.143569 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerName="extract-content" Feb 28 10:22:00 crc kubenswrapper[4996]: E0228 10:22:00.143580 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerName="extract-utilities" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.143587 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerName="extract-utilities" Feb 28 10:22:00 crc kubenswrapper[4996]: E0228 10:22:00.143605 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="registry-server" Feb 28 10:22:00 
crc kubenswrapper[4996]: I0228 10:22:00.143614 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="registry-server" Feb 28 10:22:00 crc kubenswrapper[4996]: E0228 10:22:00.143636 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="extract-content" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.143646 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="extract-content" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.143875 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d7499c9-f42e-4de3-979b-fbd6006495be" containerName="registry-server" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.143908 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c91af26-1f71-4fd0-b5ad-c0dbae75ed41" containerName="registry-server" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.144775 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537902-vxxgz" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.147196 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.147306 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.148281 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.153141 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537902-vxxgz"] Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.265719 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfb8\" (UniqueName: \"kubernetes.io/projected/e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca-kube-api-access-zcfb8\") pod \"auto-csr-approver-29537902-vxxgz\" (UID: \"e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca\") " pod="openshift-infra/auto-csr-approver-29537902-vxxgz" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.367814 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcfb8\" (UniqueName: \"kubernetes.io/projected/e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca-kube-api-access-zcfb8\") pod \"auto-csr-approver-29537902-vxxgz\" (UID: \"e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca\") " pod="openshift-infra/auto-csr-approver-29537902-vxxgz" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.389179 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcfb8\" (UniqueName: \"kubernetes.io/projected/e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca-kube-api-access-zcfb8\") pod \"auto-csr-approver-29537902-vxxgz\" (UID: \"e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca\") " 
pod="openshift-infra/auto-csr-approver-29537902-vxxgz" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.471425 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537902-vxxgz" Feb 28 10:22:00 crc kubenswrapper[4996]: I0228 10:22:00.917250 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537902-vxxgz"] Feb 28 10:22:01 crc kubenswrapper[4996]: W0228 10:22:01.207572 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode245ab39_2af1_40eb_92c9_e5e8d5b3e9ca.slice/crio-c5297e6ec58581c82a227df1c360928a277214af7311041a9d0a8951a0cb0767 WatchSource:0}: Error finding container c5297e6ec58581c82a227df1c360928a277214af7311041a9d0a8951a0cb0767: Status 404 returned error can't find the container with id c5297e6ec58581c82a227df1c360928a277214af7311041a9d0a8951a0cb0767 Feb 28 10:22:01 crc kubenswrapper[4996]: I0228 10:22:01.902291 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537902-vxxgz" event={"ID":"e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca","Type":"ContainerStarted","Data":"c5297e6ec58581c82a227df1c360928a277214af7311041a9d0a8951a0cb0767"} Feb 28 10:22:02 crc kubenswrapper[4996]: I0228 10:22:02.910836 4996 generic.go:334] "Generic (PLEG): container finished" podID="e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca" containerID="0051d17da376fe82fb75a0d0e34d871003792584248711d417e24d9bb4fc0f9f" exitCode=0 Feb 28 10:22:02 crc kubenswrapper[4996]: I0228 10:22:02.910934 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537902-vxxgz" event={"ID":"e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca","Type":"ContainerDied","Data":"0051d17da376fe82fb75a0d0e34d871003792584248711d417e24d9bb4fc0f9f"} Feb 28 10:22:04 crc kubenswrapper[4996]: I0228 10:22:04.477336 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537902-vxxgz" Feb 28 10:22:04 crc kubenswrapper[4996]: I0228 10:22:04.550256 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcfb8\" (UniqueName: \"kubernetes.io/projected/e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca-kube-api-access-zcfb8\") pod \"e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca\" (UID: \"e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca\") " Feb 28 10:22:04 crc kubenswrapper[4996]: I0228 10:22:04.557993 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca-kube-api-access-zcfb8" (OuterVolumeSpecName: "kube-api-access-zcfb8") pod "e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca" (UID: "e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca"). InnerVolumeSpecName "kube-api-access-zcfb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:22:04 crc kubenswrapper[4996]: I0228 10:22:04.652544 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcfb8\" (UniqueName: \"kubernetes.io/projected/e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca-kube-api-access-zcfb8\") on node \"crc\" DevicePath \"\"" Feb 28 10:22:04 crc kubenswrapper[4996]: I0228 10:22:04.932114 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537902-vxxgz" event={"ID":"e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca","Type":"ContainerDied","Data":"c5297e6ec58581c82a227df1c360928a277214af7311041a9d0a8951a0cb0767"} Feb 28 10:22:04 crc kubenswrapper[4996]: I0228 10:22:04.932170 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5297e6ec58581c82a227df1c360928a277214af7311041a9d0a8951a0cb0767" Feb 28 10:22:04 crc kubenswrapper[4996]: I0228 10:22:04.932220 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537902-vxxgz" Feb 28 10:22:05 crc kubenswrapper[4996]: I0228 10:22:05.551500 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537896-vwg9k"] Feb 28 10:22:05 crc kubenswrapper[4996]: I0228 10:22:05.558315 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537896-vwg9k"] Feb 28 10:22:07 crc kubenswrapper[4996]: I0228 10:22:07.046272 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b18f529-7cd0-4f0d-950a-b5056f2ca6ef" path="/var/lib/kubelet/pods/8b18f529-7cd0-4f0d-950a-b5056f2ca6ef/volumes" Feb 28 10:22:12 crc kubenswrapper[4996]: I0228 10:22:12.248917 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:22:12 crc kubenswrapper[4996]: I0228 10:22:12.249572 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:22:42 crc kubenswrapper[4996]: I0228 10:22:42.248687 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:22:42 crc kubenswrapper[4996]: I0228 10:22:42.249370 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:22:42 crc kubenswrapper[4996]: I0228 10:22:42.249410 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:22:42 crc kubenswrapper[4996]: I0228 10:22:42.250110 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1696a9863350185339499531bf245025975b43b7e8a4d15687b35e82b19e0ea"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:22:42 crc kubenswrapper[4996]: I0228 10:22:42.250173 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://b1696a9863350185339499531bf245025975b43b7e8a4d15687b35e82b19e0ea" gracePeriod=600 Feb 28 10:22:43 crc kubenswrapper[4996]: I0228 10:22:43.289245 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="b1696a9863350185339499531bf245025975b43b7e8a4d15687b35e82b19e0ea" exitCode=0 Feb 28 10:22:43 crc kubenswrapper[4996]: I0228 10:22:43.289777 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"b1696a9863350185339499531bf245025975b43b7e8a4d15687b35e82b19e0ea"} Feb 28 10:22:43 crc kubenswrapper[4996]: I0228 10:22:43.289867 4996 scope.go:117] "RemoveContainer" 
containerID="00a0028a9fa9361e75c0989b04cb779cdc185e79ac96b27a84980c5405dc6a18" Feb 28 10:22:43 crc kubenswrapper[4996]: I0228 10:22:43.289813 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3"} Feb 28 10:22:48 crc kubenswrapper[4996]: I0228 10:22:48.232241 4996 scope.go:117] "RemoveContainer" containerID="e7150f61e99a1b2105ee2db0c084d3b2226b0de2bfafd2b53d3ec90c0ebb0af3" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.163874 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537904-dwdhl"] Feb 28 10:24:00 crc kubenswrapper[4996]: E0228 10:24:00.165553 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca" containerName="oc" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.165579 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca" containerName="oc" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.165782 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca" containerName="oc" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.166430 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537904-dwdhl" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.169061 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.169140 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.169356 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.209882 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537904-dwdhl"] Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.302324 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtn95\" (UniqueName: \"kubernetes.io/projected/df4bb141-ded6-4298-87fb-b470bf1993ed-kube-api-access-dtn95\") pod \"auto-csr-approver-29537904-dwdhl\" (UID: \"df4bb141-ded6-4298-87fb-b470bf1993ed\") " pod="openshift-infra/auto-csr-approver-29537904-dwdhl" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.404937 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtn95\" (UniqueName: \"kubernetes.io/projected/df4bb141-ded6-4298-87fb-b470bf1993ed-kube-api-access-dtn95\") pod \"auto-csr-approver-29537904-dwdhl\" (UID: \"df4bb141-ded6-4298-87fb-b470bf1993ed\") " pod="openshift-infra/auto-csr-approver-29537904-dwdhl" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.435054 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtn95\" (UniqueName: \"kubernetes.io/projected/df4bb141-ded6-4298-87fb-b470bf1993ed-kube-api-access-dtn95\") pod \"auto-csr-approver-29537904-dwdhl\" (UID: \"df4bb141-ded6-4298-87fb-b470bf1993ed\") " 
pod="openshift-infra/auto-csr-approver-29537904-dwdhl" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.513706 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537904-dwdhl" Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.988143 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537904-dwdhl"] Feb 28 10:24:00 crc kubenswrapper[4996]: I0228 10:24:00.988687 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:24:01 crc kubenswrapper[4996]: I0228 10:24:01.964058 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537904-dwdhl" event={"ID":"df4bb141-ded6-4298-87fb-b470bf1993ed","Type":"ContainerStarted","Data":"b4a0d8fcfed97b784e2b999b7fcc03d2adad74f273fa1dc2ca67ce3df308a117"} Feb 28 10:24:02 crc kubenswrapper[4996]: I0228 10:24:02.978925 4996 generic.go:334] "Generic (PLEG): container finished" podID="df4bb141-ded6-4298-87fb-b470bf1993ed" containerID="e41dba95c1c2a19ca502be56fe8e6d4c99d6d23ef8965c5ce43ca7027c6a32a8" exitCode=0 Feb 28 10:24:02 crc kubenswrapper[4996]: I0228 10:24:02.979052 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537904-dwdhl" event={"ID":"df4bb141-ded6-4298-87fb-b470bf1993ed","Type":"ContainerDied","Data":"e41dba95c1c2a19ca502be56fe8e6d4c99d6d23ef8965c5ce43ca7027c6a32a8"} Feb 28 10:24:04 crc kubenswrapper[4996]: I0228 10:24:04.578029 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537904-dwdhl" Feb 28 10:24:04 crc kubenswrapper[4996]: I0228 10:24:04.695523 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtn95\" (UniqueName: \"kubernetes.io/projected/df4bb141-ded6-4298-87fb-b470bf1993ed-kube-api-access-dtn95\") pod \"df4bb141-ded6-4298-87fb-b470bf1993ed\" (UID: \"df4bb141-ded6-4298-87fb-b470bf1993ed\") " Feb 28 10:24:04 crc kubenswrapper[4996]: I0228 10:24:04.701262 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df4bb141-ded6-4298-87fb-b470bf1993ed-kube-api-access-dtn95" (OuterVolumeSpecName: "kube-api-access-dtn95") pod "df4bb141-ded6-4298-87fb-b470bf1993ed" (UID: "df4bb141-ded6-4298-87fb-b470bf1993ed"). InnerVolumeSpecName "kube-api-access-dtn95". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:24:04 crc kubenswrapper[4996]: I0228 10:24:04.797686 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtn95\" (UniqueName: \"kubernetes.io/projected/df4bb141-ded6-4298-87fb-b470bf1993ed-kube-api-access-dtn95\") on node \"crc\" DevicePath \"\"" Feb 28 10:24:05 crc kubenswrapper[4996]: I0228 10:24:05.001990 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537904-dwdhl" event={"ID":"df4bb141-ded6-4298-87fb-b470bf1993ed","Type":"ContainerDied","Data":"b4a0d8fcfed97b784e2b999b7fcc03d2adad74f273fa1dc2ca67ce3df308a117"} Feb 28 10:24:05 crc kubenswrapper[4996]: I0228 10:24:05.002074 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a0d8fcfed97b784e2b999b7fcc03d2adad74f273fa1dc2ca67ce3df308a117" Feb 28 10:24:05 crc kubenswrapper[4996]: I0228 10:24:05.002121 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537904-dwdhl" Feb 28 10:24:05 crc kubenswrapper[4996]: I0228 10:24:05.653767 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537898-4rsff"] Feb 28 10:24:05 crc kubenswrapper[4996]: I0228 10:24:05.662845 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537898-4rsff"] Feb 28 10:24:07 crc kubenswrapper[4996]: I0228 10:24:07.045440 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c95b2b-9629-4c2b-9099-46db63474148" path="/var/lib/kubelet/pods/35c95b2b-9629-4c2b-9099-46db63474148/volumes" Feb 28 10:24:42 crc kubenswrapper[4996]: I0228 10:24:42.249104 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:24:42 crc kubenswrapper[4996]: I0228 10:24:42.249854 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:24:48 crc kubenswrapper[4996]: I0228 10:24:48.353086 4996 scope.go:117] "RemoveContainer" containerID="90bf40b04206192db70461ee0ae373960dcfcb9945c5aca6c56fe74715f15cf5" Feb 28 10:25:12 crc kubenswrapper[4996]: I0228 10:25:12.248935 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:25:12 crc kubenswrapper[4996]: 
I0228 10:25:12.249497 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:25:42 crc kubenswrapper[4996]: I0228 10:25:42.249080 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:25:42 crc kubenswrapper[4996]: I0228 10:25:42.249613 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:25:42 crc kubenswrapper[4996]: I0228 10:25:42.249673 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:25:42 crc kubenswrapper[4996]: I0228 10:25:42.250510 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:25:42 crc kubenswrapper[4996]: I0228 10:25:42.250556 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" 
containerName="machine-config-daemon" containerID="cri-o://81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" gracePeriod=600 Feb 28 10:25:42 crc kubenswrapper[4996]: E0228 10:25:42.373667 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:25:43 crc kubenswrapper[4996]: I0228 10:25:43.002471 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" exitCode=0 Feb 28 10:25:43 crc kubenswrapper[4996]: I0228 10:25:43.002585 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3"} Feb 28 10:25:43 crc kubenswrapper[4996]: I0228 10:25:43.002680 4996 scope.go:117] "RemoveContainer" containerID="b1696a9863350185339499531bf245025975b43b7e8a4d15687b35e82b19e0ea" Feb 28 10:25:43 crc kubenswrapper[4996]: I0228 10:25:43.003477 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:25:43 crc kubenswrapper[4996]: E0228 10:25:43.003739 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:25:58 crc kubenswrapper[4996]: I0228 10:25:58.033406 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:25:58 crc kubenswrapper[4996]: E0228 10:25:58.034349 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.144722 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537906-7f5rs"] Feb 28 10:26:00 crc kubenswrapper[4996]: E0228 10:26:00.145443 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df4bb141-ded6-4298-87fb-b470bf1993ed" containerName="oc" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.145458 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="df4bb141-ded6-4298-87fb-b470bf1993ed" containerName="oc" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.145695 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="df4bb141-ded6-4298-87fb-b470bf1993ed" containerName="oc" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.146490 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537906-7f5rs" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.148943 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.149047 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.149217 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.155863 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537906-7f5rs"] Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.226923 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlqfn\" (UniqueName: \"kubernetes.io/projected/a609084e-320b-42fe-85b0-40cc965ed629-kube-api-access-jlqfn\") pod \"auto-csr-approver-29537906-7f5rs\" (UID: \"a609084e-320b-42fe-85b0-40cc965ed629\") " pod="openshift-infra/auto-csr-approver-29537906-7f5rs" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.329589 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlqfn\" (UniqueName: \"kubernetes.io/projected/a609084e-320b-42fe-85b0-40cc965ed629-kube-api-access-jlqfn\") pod \"auto-csr-approver-29537906-7f5rs\" (UID: \"a609084e-320b-42fe-85b0-40cc965ed629\") " pod="openshift-infra/auto-csr-approver-29537906-7f5rs" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.353306 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlqfn\" (UniqueName: \"kubernetes.io/projected/a609084e-320b-42fe-85b0-40cc965ed629-kube-api-access-jlqfn\") pod \"auto-csr-approver-29537906-7f5rs\" (UID: \"a609084e-320b-42fe-85b0-40cc965ed629\") " 
pod="openshift-infra/auto-csr-approver-29537906-7f5rs" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.463944 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537906-7f5rs" Feb 28 10:26:00 crc kubenswrapper[4996]: I0228 10:26:00.914875 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537906-7f5rs"] Feb 28 10:26:01 crc kubenswrapper[4996]: I0228 10:26:01.169528 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537906-7f5rs" event={"ID":"a609084e-320b-42fe-85b0-40cc965ed629","Type":"ContainerStarted","Data":"8027f97d2ceebbc253516df485d79481cf1f8da03dc4d57ecaa8c366f3e07bd2"} Feb 28 10:26:02 crc kubenswrapper[4996]: I0228 10:26:02.183406 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537906-7f5rs" event={"ID":"a609084e-320b-42fe-85b0-40cc965ed629","Type":"ContainerStarted","Data":"a5d7f84be8df1a89fb7aebe4312d91094547a55dcdefb2d4406dfe1a0c3da4ff"} Feb 28 10:26:02 crc kubenswrapper[4996]: I0228 10:26:02.197609 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537906-7f5rs" podStartSLOduration=1.249607281 podStartE2EDuration="2.197577259s" podCreationTimestamp="2026-02-28 10:26:00 +0000 UTC" firstStartedPulling="2026-02-28 10:26:00.916758065 +0000 UTC m=+5124.607560876" lastFinishedPulling="2026-02-28 10:26:01.864728033 +0000 UTC m=+5125.555530854" observedRunningTime="2026-02-28 10:26:02.196732258 +0000 UTC m=+5125.887535069" watchObservedRunningTime="2026-02-28 10:26:02.197577259 +0000 UTC m=+5125.888380150" Feb 28 10:26:03 crc kubenswrapper[4996]: I0228 10:26:03.192771 4996 generic.go:334] "Generic (PLEG): container finished" podID="a609084e-320b-42fe-85b0-40cc965ed629" containerID="a5d7f84be8df1a89fb7aebe4312d91094547a55dcdefb2d4406dfe1a0c3da4ff" exitCode=0 Feb 28 10:26:03 crc 
kubenswrapper[4996]: I0228 10:26:03.192982 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537906-7f5rs" event={"ID":"a609084e-320b-42fe-85b0-40cc965ed629","Type":"ContainerDied","Data":"a5d7f84be8df1a89fb7aebe4312d91094547a55dcdefb2d4406dfe1a0c3da4ff"} Feb 28 10:26:04 crc kubenswrapper[4996]: I0228 10:26:04.686986 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537906-7f5rs" Feb 28 10:26:04 crc kubenswrapper[4996]: I0228 10:26:04.825132 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlqfn\" (UniqueName: \"kubernetes.io/projected/a609084e-320b-42fe-85b0-40cc965ed629-kube-api-access-jlqfn\") pod \"a609084e-320b-42fe-85b0-40cc965ed629\" (UID: \"a609084e-320b-42fe-85b0-40cc965ed629\") " Feb 28 10:26:04 crc kubenswrapper[4996]: I0228 10:26:04.837270 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a609084e-320b-42fe-85b0-40cc965ed629-kube-api-access-jlqfn" (OuterVolumeSpecName: "kube-api-access-jlqfn") pod "a609084e-320b-42fe-85b0-40cc965ed629" (UID: "a609084e-320b-42fe-85b0-40cc965ed629"). InnerVolumeSpecName "kube-api-access-jlqfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:26:04 crc kubenswrapper[4996]: I0228 10:26:04.927996 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlqfn\" (UniqueName: \"kubernetes.io/projected/a609084e-320b-42fe-85b0-40cc965ed629-kube-api-access-jlqfn\") on node \"crc\" DevicePath \"\"" Feb 28 10:26:05 crc kubenswrapper[4996]: I0228 10:26:05.218519 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537906-7f5rs" event={"ID":"a609084e-320b-42fe-85b0-40cc965ed629","Type":"ContainerDied","Data":"8027f97d2ceebbc253516df485d79481cf1f8da03dc4d57ecaa8c366f3e07bd2"} Feb 28 10:26:05 crc kubenswrapper[4996]: I0228 10:26:05.218571 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8027f97d2ceebbc253516df485d79481cf1f8da03dc4d57ecaa8c366f3e07bd2" Feb 28 10:26:05 crc kubenswrapper[4996]: I0228 10:26:05.218655 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537906-7f5rs" Feb 28 10:26:05 crc kubenswrapper[4996]: I0228 10:26:05.264518 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537900-vr7rw"] Feb 28 10:26:05 crc kubenswrapper[4996]: I0228 10:26:05.273265 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537900-vr7rw"] Feb 28 10:26:07 crc kubenswrapper[4996]: I0228 10:26:07.044264 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4670d73a-2948-4811-95ac-e3690b832e69" path="/var/lib/kubelet/pods/4670d73a-2948-4811-95ac-e3690b832e69/volumes" Feb 28 10:26:13 crc kubenswrapper[4996]: I0228 10:26:13.033597 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:26:13 crc kubenswrapper[4996]: E0228 10:26:13.034485 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:26:24 crc kubenswrapper[4996]: I0228 10:26:24.033438 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:26:24 crc kubenswrapper[4996]: E0228 10:26:24.034289 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:26:38 crc kubenswrapper[4996]: I0228 10:26:38.033538 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:26:38 crc kubenswrapper[4996]: E0228 10:26:38.034494 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:26:48 crc kubenswrapper[4996]: I0228 10:26:48.501113 4996 scope.go:117] "RemoveContainer" containerID="7cdd7f4e983ee7efa15d74aa99c95e4cd378df3d2ed16740dc347acc7f3f4f03" Feb 28 10:26:51 crc kubenswrapper[4996]: I0228 10:26:51.033842 4996 scope.go:117] "RemoveContainer" 
containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:26:51 crc kubenswrapper[4996]: E0228 10:26:51.034595 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:27:03 crc kubenswrapper[4996]: I0228 10:27:03.033802 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:27:03 crc kubenswrapper[4996]: E0228 10:27:03.035578 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:27:17 crc kubenswrapper[4996]: I0228 10:27:17.045206 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:27:17 crc kubenswrapper[4996]: E0228 10:27:17.045872 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:27:31 crc kubenswrapper[4996]: I0228 10:27:31.033712 4996 scope.go:117] 
"RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:27:31 crc kubenswrapper[4996]: E0228 10:27:31.034657 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:27:42 crc kubenswrapper[4996]: I0228 10:27:42.033754 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:27:42 crc kubenswrapper[4996]: E0228 10:27:42.035510 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:27:56 crc kubenswrapper[4996]: I0228 10:27:56.032827 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:27:56 crc kubenswrapper[4996]: E0228 10:27:56.033545 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.164753 
4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537908-pkgtv"] Feb 28 10:28:00 crc kubenswrapper[4996]: E0228 10:28:00.167096 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a609084e-320b-42fe-85b0-40cc965ed629" containerName="oc" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.167222 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a609084e-320b-42fe-85b0-40cc965ed629" containerName="oc" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.167558 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a609084e-320b-42fe-85b0-40cc965ed629" containerName="oc" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.168471 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537908-pkgtv" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.172131 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.172606 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.172242 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.178771 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537908-pkgtv"] Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.294975 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvfbs\" (UniqueName: \"kubernetes.io/projected/9a35f961-f608-48bd-a926-041c44788c9f-kube-api-access-jvfbs\") pod \"auto-csr-approver-29537908-pkgtv\" (UID: \"9a35f961-f608-48bd-a926-041c44788c9f\") " 
pod="openshift-infra/auto-csr-approver-29537908-pkgtv" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.397925 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvfbs\" (UniqueName: \"kubernetes.io/projected/9a35f961-f608-48bd-a926-041c44788c9f-kube-api-access-jvfbs\") pod \"auto-csr-approver-29537908-pkgtv\" (UID: \"9a35f961-f608-48bd-a926-041c44788c9f\") " pod="openshift-infra/auto-csr-approver-29537908-pkgtv" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.439832 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvfbs\" (UniqueName: \"kubernetes.io/projected/9a35f961-f608-48bd-a926-041c44788c9f-kube-api-access-jvfbs\") pod \"auto-csr-approver-29537908-pkgtv\" (UID: \"9a35f961-f608-48bd-a926-041c44788c9f\") " pod="openshift-infra/auto-csr-approver-29537908-pkgtv" Feb 28 10:28:00 crc kubenswrapper[4996]: I0228 10:28:00.499337 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537908-pkgtv" Feb 28 10:28:01 crc kubenswrapper[4996]: I0228 10:28:01.002864 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537908-pkgtv"] Feb 28 10:28:01 crc kubenswrapper[4996]: I0228 10:28:01.347171 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537908-pkgtv" event={"ID":"9a35f961-f608-48bd-a926-041c44788c9f","Type":"ContainerStarted","Data":"16f274ac8d7d1677b6a028e1c9537636b7cdcbd3f3485733004cdb546b90faff"} Feb 28 10:28:02 crc kubenswrapper[4996]: I0228 10:28:02.357342 4996 generic.go:334] "Generic (PLEG): container finished" podID="9a35f961-f608-48bd-a926-041c44788c9f" containerID="a3ee880a636a4c131e07116a4d7cd929a9368c174ba8118e1c98f3b902db24e0" exitCode=0 Feb 28 10:28:02 crc kubenswrapper[4996]: I0228 10:28:02.357448 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29537908-pkgtv" event={"ID":"9a35f961-f608-48bd-a926-041c44788c9f","Type":"ContainerDied","Data":"a3ee880a636a4c131e07116a4d7cd929a9368c174ba8118e1c98f3b902db24e0"} Feb 28 10:28:03 crc kubenswrapper[4996]: I0228 10:28:03.848756 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537908-pkgtv" Feb 28 10:28:03 crc kubenswrapper[4996]: I0228 10:28:03.974808 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvfbs\" (UniqueName: \"kubernetes.io/projected/9a35f961-f608-48bd-a926-041c44788c9f-kube-api-access-jvfbs\") pod \"9a35f961-f608-48bd-a926-041c44788c9f\" (UID: \"9a35f961-f608-48bd-a926-041c44788c9f\") " Feb 28 10:28:03 crc kubenswrapper[4996]: I0228 10:28:03.982288 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a35f961-f608-48bd-a926-041c44788c9f-kube-api-access-jvfbs" (OuterVolumeSpecName: "kube-api-access-jvfbs") pod "9a35f961-f608-48bd-a926-041c44788c9f" (UID: "9a35f961-f608-48bd-a926-041c44788c9f"). InnerVolumeSpecName "kube-api-access-jvfbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:28:04 crc kubenswrapper[4996]: I0228 10:28:04.076824 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvfbs\" (UniqueName: \"kubernetes.io/projected/9a35f961-f608-48bd-a926-041c44788c9f-kube-api-access-jvfbs\") on node \"crc\" DevicePath \"\"" Feb 28 10:28:04 crc kubenswrapper[4996]: I0228 10:28:04.391727 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537908-pkgtv" event={"ID":"9a35f961-f608-48bd-a926-041c44788c9f","Type":"ContainerDied","Data":"16f274ac8d7d1677b6a028e1c9537636b7cdcbd3f3485733004cdb546b90faff"} Feb 28 10:28:04 crc kubenswrapper[4996]: I0228 10:28:04.391780 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16f274ac8d7d1677b6a028e1c9537636b7cdcbd3f3485733004cdb546b90faff" Feb 28 10:28:04 crc kubenswrapper[4996]: I0228 10:28:04.391858 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537908-pkgtv" Feb 28 10:28:04 crc kubenswrapper[4996]: I0228 10:28:04.969868 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537902-vxxgz"] Feb 28 10:28:04 crc kubenswrapper[4996]: I0228 10:28:04.983652 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537902-vxxgz"] Feb 28 10:28:05 crc kubenswrapper[4996]: I0228 10:28:05.050076 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca" path="/var/lib/kubelet/pods/e245ab39-2af1-40eb-92c9-e5e8d5b3e9ca/volumes" Feb 28 10:28:08 crc kubenswrapper[4996]: I0228 10:28:08.032970 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:28:08 crc kubenswrapper[4996]: E0228 10:28:08.033633 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:28:20 crc kubenswrapper[4996]: I0228 10:28:20.033036 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:28:20 crc kubenswrapper[4996]: E0228 10:28:20.034021 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.354341 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zcgws"] Feb 28 10:28:31 crc kubenswrapper[4996]: E0228 10:28:31.355634 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a35f961-f608-48bd-a926-041c44788c9f" containerName="oc" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.355651 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a35f961-f608-48bd-a926-041c44788c9f" containerName="oc" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.355951 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a35f961-f608-48bd-a926-041c44788c9f" containerName="oc" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.358070 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.368282 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zcgws"] Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.458615 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-catalog-content\") pod \"certified-operators-zcgws\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.458812 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9dwj\" (UniqueName: \"kubernetes.io/projected/c5e070f5-4b97-4226-9eff-64b15041da75-kube-api-access-c9dwj\") pod \"certified-operators-zcgws\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.458868 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-utilities\") pod \"certified-operators-zcgws\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.561838 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9dwj\" (UniqueName: \"kubernetes.io/projected/c5e070f5-4b97-4226-9eff-64b15041da75-kube-api-access-c9dwj\") pod \"certified-operators-zcgws\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.562427 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-utilities\") pod \"certified-operators-zcgws\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.562792 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-utilities\") pod \"certified-operators-zcgws\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.562801 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-catalog-content\") pod \"certified-operators-zcgws\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.563307 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-catalog-content\") pod \"certified-operators-zcgws\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.587059 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9dwj\" (UniqueName: \"kubernetes.io/projected/c5e070f5-4b97-4226-9eff-64b15041da75-kube-api-access-c9dwj\") pod \"certified-operators-zcgws\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:31 crc kubenswrapper[4996]: I0228 10:28:31.689676 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:32 crc kubenswrapper[4996]: I0228 10:28:32.239086 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zcgws"] Feb 28 10:28:32 crc kubenswrapper[4996]: I0228 10:28:32.677895 4996 generic.go:334] "Generic (PLEG): container finished" podID="c5e070f5-4b97-4226-9eff-64b15041da75" containerID="e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad" exitCode=0 Feb 28 10:28:32 crc kubenswrapper[4996]: I0228 10:28:32.678024 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcgws" event={"ID":"c5e070f5-4b97-4226-9eff-64b15041da75","Type":"ContainerDied","Data":"e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad"} Feb 28 10:28:32 crc kubenswrapper[4996]: I0228 10:28:32.678243 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcgws" event={"ID":"c5e070f5-4b97-4226-9eff-64b15041da75","Type":"ContainerStarted","Data":"709456f51b26cdc616ecddd091a00326825d7efb49298361c69a2f3e568d2ec0"} Feb 28 10:28:33 crc kubenswrapper[4996]: I0228 10:28:33.691890 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcgws" event={"ID":"c5e070f5-4b97-4226-9eff-64b15041da75","Type":"ContainerStarted","Data":"b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f"} Feb 28 10:28:35 crc kubenswrapper[4996]: I0228 10:28:35.034076 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:28:35 crc kubenswrapper[4996]: E0228 10:28:35.034561 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:28:35 crc kubenswrapper[4996]: I0228 10:28:35.711167 4996 generic.go:334] "Generic (PLEG): container finished" podID="c5e070f5-4b97-4226-9eff-64b15041da75" containerID="b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f" exitCode=0 Feb 28 10:28:35 crc kubenswrapper[4996]: I0228 10:28:35.711222 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcgws" event={"ID":"c5e070f5-4b97-4226-9eff-64b15041da75","Type":"ContainerDied","Data":"b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f"} Feb 28 10:28:37 crc kubenswrapper[4996]: I0228 10:28:37.737838 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcgws" event={"ID":"c5e070f5-4b97-4226-9eff-64b15041da75","Type":"ContainerStarted","Data":"0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed"} Feb 28 10:28:37 crc kubenswrapper[4996]: I0228 10:28:37.759900 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zcgws" podStartSLOduration=3.343500929 podStartE2EDuration="6.759875591s" podCreationTimestamp="2026-02-28 10:28:31 +0000 UTC" firstStartedPulling="2026-02-28 10:28:32.679956981 +0000 UTC m=+5276.370759792" lastFinishedPulling="2026-02-28 10:28:36.096331603 +0000 UTC m=+5279.787134454" observedRunningTime="2026-02-28 10:28:37.757221176 +0000 UTC m=+5281.448023997" watchObservedRunningTime="2026-02-28 10:28:37.759875591 +0000 UTC m=+5281.450678432" Feb 28 10:28:41 crc kubenswrapper[4996]: I0228 10:28:41.690710 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:41 crc kubenswrapper[4996]: I0228 
10:28:41.691287 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:41 crc kubenswrapper[4996]: I0228 10:28:41.751073 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:41 crc kubenswrapper[4996]: I0228 10:28:41.856894 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:41 crc kubenswrapper[4996]: I0228 10:28:41.993934 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zcgws"] Feb 28 10:28:43 crc kubenswrapper[4996]: I0228 10:28:43.801102 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zcgws" podUID="c5e070f5-4b97-4226-9eff-64b15041da75" containerName="registry-server" containerID="cri-o://0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed" gracePeriod=2 Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.322304 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.421407 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-catalog-content\") pod \"c5e070f5-4b97-4226-9eff-64b15041da75\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.421522 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9dwj\" (UniqueName: \"kubernetes.io/projected/c5e070f5-4b97-4226-9eff-64b15041da75-kube-api-access-c9dwj\") pod \"c5e070f5-4b97-4226-9eff-64b15041da75\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.421685 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-utilities\") pod \"c5e070f5-4b97-4226-9eff-64b15041da75\" (UID: \"c5e070f5-4b97-4226-9eff-64b15041da75\") " Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.423536 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-utilities" (OuterVolumeSpecName: "utilities") pod "c5e070f5-4b97-4226-9eff-64b15041da75" (UID: "c5e070f5-4b97-4226-9eff-64b15041da75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.444309 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e070f5-4b97-4226-9eff-64b15041da75-kube-api-access-c9dwj" (OuterVolumeSpecName: "kube-api-access-c9dwj") pod "c5e070f5-4b97-4226-9eff-64b15041da75" (UID: "c5e070f5-4b97-4226-9eff-64b15041da75"). InnerVolumeSpecName "kube-api-access-c9dwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.499591 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5e070f5-4b97-4226-9eff-64b15041da75" (UID: "c5e070f5-4b97-4226-9eff-64b15041da75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.524119 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.524162 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5e070f5-4b97-4226-9eff-64b15041da75-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.524178 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9dwj\" (UniqueName: \"kubernetes.io/projected/c5e070f5-4b97-4226-9eff-64b15041da75-kube-api-access-c9dwj\") on node \"crc\" DevicePath \"\"" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.812887 4996 generic.go:334] "Generic (PLEG): container finished" podID="c5e070f5-4b97-4226-9eff-64b15041da75" containerID="0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed" exitCode=0 Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.812945 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zcgws" event={"ID":"c5e070f5-4b97-4226-9eff-64b15041da75","Type":"ContainerDied","Data":"0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed"} Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.812984 4996 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-zcgws" event={"ID":"c5e070f5-4b97-4226-9eff-64b15041da75","Type":"ContainerDied","Data":"709456f51b26cdc616ecddd091a00326825d7efb49298361c69a2f3e568d2ec0"} Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.813060 4996 scope.go:117] "RemoveContainer" containerID="0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.813246 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zcgws" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.854568 4996 scope.go:117] "RemoveContainer" containerID="b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.857979 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zcgws"] Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.870376 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zcgws"] Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.897414 4996 scope.go:117] "RemoveContainer" containerID="e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.935982 4996 scope.go:117] "RemoveContainer" containerID="0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed" Feb 28 10:28:44 crc kubenswrapper[4996]: E0228 10:28:44.937669 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed\": container with ID starting with 0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed not found: ID does not exist" containerID="0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 
10:28:44.937742 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed"} err="failed to get container status \"0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed\": rpc error: code = NotFound desc = could not find container \"0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed\": container with ID starting with 0b0a99d9969afbaba4948987cf2d87b33bd93d8c63e5ee363f1107630e1533ed not found: ID does not exist" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.937789 4996 scope.go:117] "RemoveContainer" containerID="b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f" Feb 28 10:28:44 crc kubenswrapper[4996]: E0228 10:28:44.938294 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f\": container with ID starting with b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f not found: ID does not exist" containerID="b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.938337 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f"} err="failed to get container status \"b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f\": rpc error: code = NotFound desc = could not find container \"b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f\": container with ID starting with b16b6344ac786d9de3e15f5c56ec6349aad777d8542b2034fb2d2ebce6ef9d6f not found: ID does not exist" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.938361 4996 scope.go:117] "RemoveContainer" containerID="e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad" Feb 28 10:28:44 crc 
kubenswrapper[4996]: E0228 10:28:44.938861 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad\": container with ID starting with e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad not found: ID does not exist" containerID="e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad" Feb 28 10:28:44 crc kubenswrapper[4996]: I0228 10:28:44.938915 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad"} err="failed to get container status \"e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad\": rpc error: code = NotFound desc = could not find container \"e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad\": container with ID starting with e9c5809c09e4a46515fdd0be348e74301e4385ad7150746dc139265435c63bad not found: ID does not exist" Feb 28 10:28:45 crc kubenswrapper[4996]: I0228 10:28:45.044195 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e070f5-4b97-4226-9eff-64b15041da75" path="/var/lib/kubelet/pods/c5e070f5-4b97-4226-9eff-64b15041da75/volumes" Feb 28 10:28:48 crc kubenswrapper[4996]: I0228 10:28:48.034078 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:28:48 crc kubenswrapper[4996]: E0228 10:28:48.034589 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:28:48 crc 
kubenswrapper[4996]: I0228 10:28:48.594022 4996 scope.go:117] "RemoveContainer" containerID="0051d17da376fe82fb75a0d0e34d871003792584248711d417e24d9bb4fc0f9f" Feb 28 10:29:01 crc kubenswrapper[4996]: I0228 10:29:01.033059 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:29:01 crc kubenswrapper[4996]: E0228 10:29:01.033899 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:29:14 crc kubenswrapper[4996]: I0228 10:29:14.032970 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:29:14 crc kubenswrapper[4996]: E0228 10:29:14.033729 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:29:27 crc kubenswrapper[4996]: I0228 10:29:27.041070 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:29:27 crc kubenswrapper[4996]: E0228 10:29:27.041717 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:29:39 crc kubenswrapper[4996]: I0228 10:29:39.036245 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:29:39 crc kubenswrapper[4996]: E0228 10:29:39.037188 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:29:51 crc kubenswrapper[4996]: I0228 10:29:51.033900 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:29:51 crc kubenswrapper[4996]: E0228 10:29:51.034591 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.162818 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537910-r9bkl"] Feb 28 10:30:00 crc kubenswrapper[4996]: E0228 10:30:00.163847 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e070f5-4b97-4226-9eff-64b15041da75" containerName="extract-utilities" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 
10:30:00.163866 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e070f5-4b97-4226-9eff-64b15041da75" containerName="extract-utilities" Feb 28 10:30:00 crc kubenswrapper[4996]: E0228 10:30:00.163890 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e070f5-4b97-4226-9eff-64b15041da75" containerName="extract-content" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.163898 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e070f5-4b97-4226-9eff-64b15041da75" containerName="extract-content" Feb 28 10:30:00 crc kubenswrapper[4996]: E0228 10:30:00.163917 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e070f5-4b97-4226-9eff-64b15041da75" containerName="registry-server" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.163925 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e070f5-4b97-4226-9eff-64b15041da75" containerName="registry-server" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.164162 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e070f5-4b97-4226-9eff-64b15041da75" containerName="registry-server" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.164928 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537910-r9bkl" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.167217 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.167447 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.168036 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.185427 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq"] Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.187309 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.190133 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.190273 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.195218 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537910-r9bkl"] Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.208084 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq"] Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.304805 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/a19d6c20-abe2-4b10-a110-0a535cd35297-config-volume\") pod \"collect-profiles-29537910-xmqfq\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.304921 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq2vb\" (UniqueName: \"kubernetes.io/projected/aeb6d69a-c218-4539-813b-a291b7bbb243-kube-api-access-wq2vb\") pod \"auto-csr-approver-29537910-r9bkl\" (UID: \"aeb6d69a-c218-4539-813b-a291b7bbb243\") " pod="openshift-infra/auto-csr-approver-29537910-r9bkl" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.305003 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvzqm\" (UniqueName: \"kubernetes.io/projected/a19d6c20-abe2-4b10-a110-0a535cd35297-kube-api-access-rvzqm\") pod \"collect-profiles-29537910-xmqfq\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.305080 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19d6c20-abe2-4b10-a110-0a535cd35297-secret-volume\") pod \"collect-profiles-29537910-xmqfq\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.406586 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq2vb\" (UniqueName: \"kubernetes.io/projected/aeb6d69a-c218-4539-813b-a291b7bbb243-kube-api-access-wq2vb\") pod \"auto-csr-approver-29537910-r9bkl\" (UID: \"aeb6d69a-c218-4539-813b-a291b7bbb243\") " pod="openshift-infra/auto-csr-approver-29537910-r9bkl" Feb 28 
10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.406681 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvzqm\" (UniqueName: \"kubernetes.io/projected/a19d6c20-abe2-4b10-a110-0a535cd35297-kube-api-access-rvzqm\") pod \"collect-profiles-29537910-xmqfq\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.406759 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19d6c20-abe2-4b10-a110-0a535cd35297-secret-volume\") pod \"collect-profiles-29537910-xmqfq\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.406844 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19d6c20-abe2-4b10-a110-0a535cd35297-config-volume\") pod \"collect-profiles-29537910-xmqfq\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.408416 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19d6c20-abe2-4b10-a110-0a535cd35297-config-volume\") pod \"collect-profiles-29537910-xmqfq\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.416779 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19d6c20-abe2-4b10-a110-0a535cd35297-secret-volume\") pod \"collect-profiles-29537910-xmqfq\" (UID: 
\"a19d6c20-abe2-4b10-a110-0a535cd35297\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.425112 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq2vb\" (UniqueName: \"kubernetes.io/projected/aeb6d69a-c218-4539-813b-a291b7bbb243-kube-api-access-wq2vb\") pod \"auto-csr-approver-29537910-r9bkl\" (UID: \"aeb6d69a-c218-4539-813b-a291b7bbb243\") " pod="openshift-infra/auto-csr-approver-29537910-r9bkl" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.429319 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvzqm\" (UniqueName: \"kubernetes.io/projected/a19d6c20-abe2-4b10-a110-0a535cd35297-kube-api-access-rvzqm\") pod \"collect-profiles-29537910-xmqfq\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.487342 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537910-r9bkl" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.507142 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:00 crc kubenswrapper[4996]: I0228 10:30:00.995224 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537910-r9bkl"] Feb 28 10:30:01 crc kubenswrapper[4996]: I0228 10:30:01.000615 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:30:01 crc kubenswrapper[4996]: I0228 10:30:01.073481 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq"] Feb 28 10:30:01 crc kubenswrapper[4996]: W0228 10:30:01.503430 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda19d6c20_abe2_4b10_a110_0a535cd35297.slice/crio-8b806ab2582193ed1deee8827114fef73c5039add9d72e1089f58393d8454ba6 WatchSource:0}: Error finding container 8b806ab2582193ed1deee8827114fef73c5039add9d72e1089f58393d8454ba6: Status 404 returned error can't find the container with id 8b806ab2582193ed1deee8827114fef73c5039add9d72e1089f58393d8454ba6 Feb 28 10:30:01 crc kubenswrapper[4996]: I0228 10:30:01.521363 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" event={"ID":"a19d6c20-abe2-4b10-a110-0a535cd35297","Type":"ContainerStarted","Data":"8b806ab2582193ed1deee8827114fef73c5039add9d72e1089f58393d8454ba6"} Feb 28 10:30:01 crc kubenswrapper[4996]: I0228 10:30:01.522753 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537910-r9bkl" event={"ID":"aeb6d69a-c218-4539-813b-a291b7bbb243","Type":"ContainerStarted","Data":"4ad370ed8722e790cdd9d6405df7592618dbdd2b71f74092b0703bb6b14987ae"} Feb 28 10:30:02 crc kubenswrapper[4996]: I0228 10:30:02.033666 4996 scope.go:117] "RemoveContainer" 
containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:30:02 crc kubenswrapper[4996]: E0228 10:30:02.034259 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:30:02 crc kubenswrapper[4996]: I0228 10:30:02.533739 4996 generic.go:334] "Generic (PLEG): container finished" podID="a19d6c20-abe2-4b10-a110-0a535cd35297" containerID="83250618dd248b38578b7336e79fdb5d88803f61bd010c82426fdb6611d4c928" exitCode=0 Feb 28 10:30:02 crc kubenswrapper[4996]: I0228 10:30:02.533779 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" event={"ID":"a19d6c20-abe2-4b10-a110-0a535cd35297","Type":"ContainerDied","Data":"83250618dd248b38578b7336e79fdb5d88803f61bd010c82426fdb6611d4c928"} Feb 28 10:30:03 crc kubenswrapper[4996]: I0228 10:30:03.565840 4996 generic.go:334] "Generic (PLEG): container finished" podID="aeb6d69a-c218-4539-813b-a291b7bbb243" containerID="b31dfef930a477bc4a467f196be272d643d26c84f932caef6c8f06ac2a8cb78e" exitCode=0 Feb 28 10:30:03 crc kubenswrapper[4996]: I0228 10:30:03.566083 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537910-r9bkl" event={"ID":"aeb6d69a-c218-4539-813b-a291b7bbb243","Type":"ContainerDied","Data":"b31dfef930a477bc4a467f196be272d643d26c84f932caef6c8f06ac2a8cb78e"} Feb 28 10:30:03 crc kubenswrapper[4996]: I0228 10:30:03.991433 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.093530 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvzqm\" (UniqueName: \"kubernetes.io/projected/a19d6c20-abe2-4b10-a110-0a535cd35297-kube-api-access-rvzqm\") pod \"a19d6c20-abe2-4b10-a110-0a535cd35297\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.093784 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19d6c20-abe2-4b10-a110-0a535cd35297-secret-volume\") pod \"a19d6c20-abe2-4b10-a110-0a535cd35297\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.093831 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19d6c20-abe2-4b10-a110-0a535cd35297-config-volume\") pod \"a19d6c20-abe2-4b10-a110-0a535cd35297\" (UID: \"a19d6c20-abe2-4b10-a110-0a535cd35297\") " Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.094357 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a19d6c20-abe2-4b10-a110-0a535cd35297-config-volume" (OuterVolumeSpecName: "config-volume") pod "a19d6c20-abe2-4b10-a110-0a535cd35297" (UID: "a19d6c20-abe2-4b10-a110-0a535cd35297"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.100222 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19d6c20-abe2-4b10-a110-0a535cd35297-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a19d6c20-abe2-4b10-a110-0a535cd35297" (UID: "a19d6c20-abe2-4b10-a110-0a535cd35297"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.100586 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19d6c20-abe2-4b10-a110-0a535cd35297-kube-api-access-rvzqm" (OuterVolumeSpecName: "kube-api-access-rvzqm") pod "a19d6c20-abe2-4b10-a110-0a535cd35297" (UID: "a19d6c20-abe2-4b10-a110-0a535cd35297"). InnerVolumeSpecName "kube-api-access-rvzqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.196861 4996 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a19d6c20-abe2-4b10-a110-0a535cd35297-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.196907 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a19d6c20-abe2-4b10-a110-0a535cd35297-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.196921 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvzqm\" (UniqueName: \"kubernetes.io/projected/a19d6c20-abe2-4b10-a110-0a535cd35297-kube-api-access-rvzqm\") on node \"crc\" DevicePath \"\"" Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.577902 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.578088 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq" event={"ID":"a19d6c20-abe2-4b10-a110-0a535cd35297","Type":"ContainerDied","Data":"8b806ab2582193ed1deee8827114fef73c5039add9d72e1089f58393d8454ba6"} Feb 28 10:30:04 crc kubenswrapper[4996]: I0228 10:30:04.578677 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b806ab2582193ed1deee8827114fef73c5039add9d72e1089f58393d8454ba6" Feb 28 10:30:05 crc kubenswrapper[4996]: I0228 10:30:05.085065 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537910-r9bkl" Feb 28 10:30:05 crc kubenswrapper[4996]: I0228 10:30:05.106547 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t"] Feb 28 10:30:05 crc kubenswrapper[4996]: I0228 10:30:05.120380 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537865-hj59t"] Feb 28 10:30:05 crc kubenswrapper[4996]: I0228 10:30:05.232631 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq2vb\" (UniqueName: \"kubernetes.io/projected/aeb6d69a-c218-4539-813b-a291b7bbb243-kube-api-access-wq2vb\") pod \"aeb6d69a-c218-4539-813b-a291b7bbb243\" (UID: \"aeb6d69a-c218-4539-813b-a291b7bbb243\") " Feb 28 10:30:05 crc kubenswrapper[4996]: I0228 10:30:05.248230 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb6d69a-c218-4539-813b-a291b7bbb243-kube-api-access-wq2vb" (OuterVolumeSpecName: "kube-api-access-wq2vb") pod "aeb6d69a-c218-4539-813b-a291b7bbb243" (UID: "aeb6d69a-c218-4539-813b-a291b7bbb243"). 
InnerVolumeSpecName "kube-api-access-wq2vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:30:05 crc kubenswrapper[4996]: I0228 10:30:05.335470 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq2vb\" (UniqueName: \"kubernetes.io/projected/aeb6d69a-c218-4539-813b-a291b7bbb243-kube-api-access-wq2vb\") on node \"crc\" DevicePath \"\"" Feb 28 10:30:05 crc kubenswrapper[4996]: I0228 10:30:05.597327 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537910-r9bkl" event={"ID":"aeb6d69a-c218-4539-813b-a291b7bbb243","Type":"ContainerDied","Data":"4ad370ed8722e790cdd9d6405df7592618dbdd2b71f74092b0703bb6b14987ae"} Feb 28 10:30:05 crc kubenswrapper[4996]: I0228 10:30:05.597401 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ad370ed8722e790cdd9d6405df7592618dbdd2b71f74092b0703bb6b14987ae" Feb 28 10:30:05 crc kubenswrapper[4996]: I0228 10:30:05.597496 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537910-r9bkl" Feb 28 10:30:06 crc kubenswrapper[4996]: I0228 10:30:06.155926 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537904-dwdhl"] Feb 28 10:30:06 crc kubenswrapper[4996]: I0228 10:30:06.169261 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537904-dwdhl"] Feb 28 10:30:07 crc kubenswrapper[4996]: I0228 10:30:07.053225 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df4bb141-ded6-4298-87fb-b470bf1993ed" path="/var/lib/kubelet/pods/df4bb141-ded6-4298-87fb-b470bf1993ed/volumes" Feb 28 10:30:07 crc kubenswrapper[4996]: I0228 10:30:07.054986 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffa9b75d-52fd-4522-b097-0c88036f0fa1" path="/var/lib/kubelet/pods/ffa9b75d-52fd-4522-b097-0c88036f0fa1/volumes" Feb 28 10:30:13 crc kubenswrapper[4996]: I0228 10:30:13.033820 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:30:13 crc kubenswrapper[4996]: E0228 10:30:13.034597 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:30:26 crc kubenswrapper[4996]: I0228 10:30:26.033475 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:30:26 crc kubenswrapper[4996]: E0228 10:30:26.034133 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:30:40 crc kubenswrapper[4996]: I0228 10:30:40.033996 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:30:40 crc kubenswrapper[4996]: E0228 10:30:40.035245 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:30:48 crc kubenswrapper[4996]: I0228 10:30:48.711903 4996 scope.go:117] "RemoveContainer" containerID="e41dba95c1c2a19ca502be56fe8e6d4c99d6d23ef8965c5ce43ca7027c6a32a8" Feb 28 10:30:48 crc kubenswrapper[4996]: I0228 10:30:48.762931 4996 scope.go:117] "RemoveContainer" containerID="4500134914afcc7fb50ece710f28eceddee5d04baa5c4d66318df863e4e8a76c" Feb 28 10:30:52 crc kubenswrapper[4996]: I0228 10:30:52.033884 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:30:53 crc kubenswrapper[4996]: I0228 10:30:53.015759 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"b1f772f1fbd88221bf8998eab224ce8c51bde8ffca295bec9fa759cbc6c1a8b5"} Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.397907 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xkqcx"] Feb 28 
10:31:20 crc kubenswrapper[4996]: E0228 10:31:20.398907 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb6d69a-c218-4539-813b-a291b7bbb243" containerName="oc" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.398924 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb6d69a-c218-4539-813b-a291b7bbb243" containerName="oc" Feb 28 10:31:20 crc kubenswrapper[4996]: E0228 10:31:20.398937 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19d6c20-abe2-4b10-a110-0a535cd35297" containerName="collect-profiles" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.398944 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19d6c20-abe2-4b10-a110-0a535cd35297" containerName="collect-profiles" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.399229 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19d6c20-abe2-4b10-a110-0a535cd35297" containerName="collect-profiles" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.399266 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb6d69a-c218-4539-813b-a291b7bbb243" containerName="oc" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.401045 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.440266 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xkqcx"] Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.483794 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-catalog-content\") pod \"community-operators-xkqcx\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.484249 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx26x\" (UniqueName: \"kubernetes.io/projected/e3b974e5-af5d-433c-8d76-1f4662e64a20-kube-api-access-dx26x\") pod \"community-operators-xkqcx\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.484523 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-utilities\") pod \"community-operators-xkqcx\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.587186 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-utilities\") pod \"community-operators-xkqcx\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.587743 4996 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-utilities\") pod \"community-operators-xkqcx\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.588037 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-catalog-content\") pod \"community-operators-xkqcx\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.588362 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-catalog-content\") pod \"community-operators-xkqcx\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.588597 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx26x\" (UniqueName: \"kubernetes.io/projected/e3b974e5-af5d-433c-8d76-1f4662e64a20-kube-api-access-dx26x\") pod \"community-operators-xkqcx\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.611981 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx26x\" (UniqueName: \"kubernetes.io/projected/e3b974e5-af5d-433c-8d76-1f4662e64a20-kube-api-access-dx26x\") pod \"community-operators-xkqcx\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:20 crc kubenswrapper[4996]: I0228 10:31:20.727528 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:21 crc kubenswrapper[4996]: I0228 10:31:21.232758 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xkqcx"] Feb 28 10:31:21 crc kubenswrapper[4996]: I0228 10:31:21.521815 4996 generic.go:334] "Generic (PLEG): container finished" podID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerID="cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4" exitCode=0 Feb 28 10:31:21 crc kubenswrapper[4996]: I0228 10:31:21.522075 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkqcx" event={"ID":"e3b974e5-af5d-433c-8d76-1f4662e64a20","Type":"ContainerDied","Data":"cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4"} Feb 28 10:31:21 crc kubenswrapper[4996]: I0228 10:31:21.522222 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkqcx" event={"ID":"e3b974e5-af5d-433c-8d76-1f4662e64a20","Type":"ContainerStarted","Data":"7884d49143a7c2b833c1b507a0ea32974c0c35f483941c5d75fad363e304edfe"} Feb 28 10:31:22 crc kubenswrapper[4996]: I0228 10:31:22.534654 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkqcx" event={"ID":"e3b974e5-af5d-433c-8d76-1f4662e64a20","Type":"ContainerStarted","Data":"f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17"} Feb 28 10:31:24 crc kubenswrapper[4996]: I0228 10:31:24.554299 4996 generic.go:334] "Generic (PLEG): container finished" podID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerID="f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17" exitCode=0 Feb 28 10:31:24 crc kubenswrapper[4996]: I0228 10:31:24.554646 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkqcx" 
event={"ID":"e3b974e5-af5d-433c-8d76-1f4662e64a20","Type":"ContainerDied","Data":"f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17"} Feb 28 10:31:25 crc kubenswrapper[4996]: I0228 10:31:25.569061 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkqcx" event={"ID":"e3b974e5-af5d-433c-8d76-1f4662e64a20","Type":"ContainerStarted","Data":"390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4"} Feb 28 10:31:25 crc kubenswrapper[4996]: I0228 10:31:25.594149 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xkqcx" podStartSLOduration=2.097043218 podStartE2EDuration="5.594128686s" podCreationTimestamp="2026-02-28 10:31:20 +0000 UTC" firstStartedPulling="2026-02-28 10:31:21.525716282 +0000 UTC m=+5445.216519133" lastFinishedPulling="2026-02-28 10:31:25.02280176 +0000 UTC m=+5448.713604601" observedRunningTime="2026-02-28 10:31:25.591550832 +0000 UTC m=+5449.282353643" watchObservedRunningTime="2026-02-28 10:31:25.594128686 +0000 UTC m=+5449.284931507" Feb 28 10:31:30 crc kubenswrapper[4996]: I0228 10:31:30.727665 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:30 crc kubenswrapper[4996]: I0228 10:31:30.728115 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:30 crc kubenswrapper[4996]: I0228 10:31:30.791338 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:31 crc kubenswrapper[4996]: I0228 10:31:31.671948 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:31 crc kubenswrapper[4996]: I0228 10:31:31.723978 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-xkqcx"] Feb 28 10:31:33 crc kubenswrapper[4996]: I0228 10:31:33.639962 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xkqcx" podUID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerName="registry-server" containerID="cri-o://390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4" gracePeriod=2 Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.218433 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.301733 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx26x\" (UniqueName: \"kubernetes.io/projected/e3b974e5-af5d-433c-8d76-1f4662e64a20-kube-api-access-dx26x\") pod \"e3b974e5-af5d-433c-8d76-1f4662e64a20\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.301799 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-utilities\") pod \"e3b974e5-af5d-433c-8d76-1f4662e64a20\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.301847 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-catalog-content\") pod \"e3b974e5-af5d-433c-8d76-1f4662e64a20\" (UID: \"e3b974e5-af5d-433c-8d76-1f4662e64a20\") " Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.302982 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-utilities" (OuterVolumeSpecName: "utilities") pod "e3b974e5-af5d-433c-8d76-1f4662e64a20" (UID: 
"e3b974e5-af5d-433c-8d76-1f4662e64a20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.314223 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b974e5-af5d-433c-8d76-1f4662e64a20-kube-api-access-dx26x" (OuterVolumeSpecName: "kube-api-access-dx26x") pod "e3b974e5-af5d-433c-8d76-1f4662e64a20" (UID: "e3b974e5-af5d-433c-8d76-1f4662e64a20"). InnerVolumeSpecName "kube-api-access-dx26x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.352657 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3b974e5-af5d-433c-8d76-1f4662e64a20" (UID: "e3b974e5-af5d-433c-8d76-1f4662e64a20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.404255 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.404306 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx26x\" (UniqueName: \"kubernetes.io/projected/e3b974e5-af5d-433c-8d76-1f4662e64a20-kube-api-access-dx26x\") on node \"crc\" DevicePath \"\"" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.404330 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3b974e5-af5d-433c-8d76-1f4662e64a20-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.649377 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerID="390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4" exitCode=0 Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.649425 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xkqcx" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.649440 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkqcx" event={"ID":"e3b974e5-af5d-433c-8d76-1f4662e64a20","Type":"ContainerDied","Data":"390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4"} Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.649825 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xkqcx" event={"ID":"e3b974e5-af5d-433c-8d76-1f4662e64a20","Type":"ContainerDied","Data":"7884d49143a7c2b833c1b507a0ea32974c0c35f483941c5d75fad363e304edfe"} Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.649841 4996 scope.go:117] "RemoveContainer" containerID="390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.684705 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xkqcx"] Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.694035 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xkqcx"] Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.695925 4996 scope.go:117] "RemoveContainer" containerID="f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.728956 4996 scope.go:117] "RemoveContainer" containerID="cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.779250 4996 scope.go:117] "RemoveContainer" 
containerID="390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4" Feb 28 10:31:34 crc kubenswrapper[4996]: E0228 10:31:34.779889 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4\": container with ID starting with 390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4 not found: ID does not exist" containerID="390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.779954 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4"} err="failed to get container status \"390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4\": rpc error: code = NotFound desc = could not find container \"390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4\": container with ID starting with 390c56ecab66a972561132fb3b9e91a3db5732be6feb9ae7b07d24b9d624ffd4 not found: ID does not exist" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.779991 4996 scope.go:117] "RemoveContainer" containerID="f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17" Feb 28 10:31:34 crc kubenswrapper[4996]: E0228 10:31:34.780526 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17\": container with ID starting with f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17 not found: ID does not exist" containerID="f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.780561 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17"} err="failed to get container status \"f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17\": rpc error: code = NotFound desc = could not find container \"f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17\": container with ID starting with f2a971b1509ab1423e6f5a4900cc6189e6cb7747cdf62b492b7893c6e62a6a17 not found: ID does not exist" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.780583 4996 scope.go:117] "RemoveContainer" containerID="cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4" Feb 28 10:31:34 crc kubenswrapper[4996]: E0228 10:31:34.781055 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4\": container with ID starting with cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4 not found: ID does not exist" containerID="cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4" Feb 28 10:31:34 crc kubenswrapper[4996]: I0228 10:31:34.781092 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4"} err="failed to get container status \"cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4\": rpc error: code = NotFound desc = could not find container \"cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4\": container with ID starting with cca2b46b53c29a928d77ea34fee8b0738d7953ad3de50f803cd270a18806f8c4 not found: ID does not exist" Feb 28 10:31:35 crc kubenswrapper[4996]: I0228 10:31:35.045054 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b974e5-af5d-433c-8d76-1f4662e64a20" path="/var/lib/kubelet/pods/e3b974e5-af5d-433c-8d76-1f4662e64a20/volumes" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 
10:32:00.142713 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537912-cfn8s"] Feb 28 10:32:00 crc kubenswrapper[4996]: E0228 10:32:00.143521 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerName="extract-utilities" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.143534 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerName="extract-utilities" Feb 28 10:32:00 crc kubenswrapper[4996]: E0228 10:32:00.143552 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerName="registry-server" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.143559 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerName="registry-server" Feb 28 10:32:00 crc kubenswrapper[4996]: E0228 10:32:00.143576 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerName="extract-content" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.143583 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerName="extract-content" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.143758 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b974e5-af5d-433c-8d76-1f4662e64a20" containerName="registry-server" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.144387 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537912-cfn8s" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.147059 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.147187 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.147304 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.165605 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537912-cfn8s"] Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.272636 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wmlg\" (UniqueName: \"kubernetes.io/projected/9227a54e-0f12-4a36-8321-db0b176c6a4c-kube-api-access-7wmlg\") pod \"auto-csr-approver-29537912-cfn8s\" (UID: \"9227a54e-0f12-4a36-8321-db0b176c6a4c\") " pod="openshift-infra/auto-csr-approver-29537912-cfn8s" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.375735 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wmlg\" (UniqueName: \"kubernetes.io/projected/9227a54e-0f12-4a36-8321-db0b176c6a4c-kube-api-access-7wmlg\") pod \"auto-csr-approver-29537912-cfn8s\" (UID: \"9227a54e-0f12-4a36-8321-db0b176c6a4c\") " pod="openshift-infra/auto-csr-approver-29537912-cfn8s" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.402761 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wmlg\" (UniqueName: \"kubernetes.io/projected/9227a54e-0f12-4a36-8321-db0b176c6a4c-kube-api-access-7wmlg\") pod \"auto-csr-approver-29537912-cfn8s\" (UID: \"9227a54e-0f12-4a36-8321-db0b176c6a4c\") " 
pod="openshift-infra/auto-csr-approver-29537912-cfn8s" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.475549 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537912-cfn8s" Feb 28 10:32:00 crc kubenswrapper[4996]: I0228 10:32:00.932069 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537912-cfn8s"] Feb 28 10:32:01 crc kubenswrapper[4996]: I0228 10:32:01.925240 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537912-cfn8s" event={"ID":"9227a54e-0f12-4a36-8321-db0b176c6a4c","Type":"ContainerStarted","Data":"38a8cf3a9e1719c5dbbfb6dd0c914796e1e1b4f7493be35563d5681d2a41120f"} Feb 28 10:32:02 crc kubenswrapper[4996]: I0228 10:32:02.939831 4996 generic.go:334] "Generic (PLEG): container finished" podID="9227a54e-0f12-4a36-8321-db0b176c6a4c" containerID="60ee07732f85e51562754e2baaf271c16460df56f265fa4d88228b5476d349b8" exitCode=0 Feb 28 10:32:02 crc kubenswrapper[4996]: I0228 10:32:02.939880 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537912-cfn8s" event={"ID":"9227a54e-0f12-4a36-8321-db0b176c6a4c","Type":"ContainerDied","Data":"60ee07732f85e51562754e2baaf271c16460df56f265fa4d88228b5476d349b8"} Feb 28 10:32:04 crc kubenswrapper[4996]: I0228 10:32:04.335319 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537912-cfn8s" Feb 28 10:32:04 crc kubenswrapper[4996]: I0228 10:32:04.461795 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wmlg\" (UniqueName: \"kubernetes.io/projected/9227a54e-0f12-4a36-8321-db0b176c6a4c-kube-api-access-7wmlg\") pod \"9227a54e-0f12-4a36-8321-db0b176c6a4c\" (UID: \"9227a54e-0f12-4a36-8321-db0b176c6a4c\") " Feb 28 10:32:04 crc kubenswrapper[4996]: I0228 10:32:04.472930 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9227a54e-0f12-4a36-8321-db0b176c6a4c-kube-api-access-7wmlg" (OuterVolumeSpecName: "kube-api-access-7wmlg") pod "9227a54e-0f12-4a36-8321-db0b176c6a4c" (UID: "9227a54e-0f12-4a36-8321-db0b176c6a4c"). InnerVolumeSpecName "kube-api-access-7wmlg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:32:04 crc kubenswrapper[4996]: I0228 10:32:04.565973 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wmlg\" (UniqueName: \"kubernetes.io/projected/9227a54e-0f12-4a36-8321-db0b176c6a4c-kube-api-access-7wmlg\") on node \"crc\" DevicePath \"\"" Feb 28 10:32:04 crc kubenswrapper[4996]: I0228 10:32:04.962809 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537912-cfn8s" event={"ID":"9227a54e-0f12-4a36-8321-db0b176c6a4c","Type":"ContainerDied","Data":"38a8cf3a9e1719c5dbbfb6dd0c914796e1e1b4f7493be35563d5681d2a41120f"} Feb 28 10:32:04 crc kubenswrapper[4996]: I0228 10:32:04.962862 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38a8cf3a9e1719c5dbbfb6dd0c914796e1e1b4f7493be35563d5681d2a41120f" Feb 28 10:32:04 crc kubenswrapper[4996]: I0228 10:32:04.963283 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537912-cfn8s" Feb 28 10:32:05 crc kubenswrapper[4996]: I0228 10:32:05.437768 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537906-7f5rs"] Feb 28 10:32:05 crc kubenswrapper[4996]: I0228 10:32:05.448174 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537906-7f5rs"] Feb 28 10:32:07 crc kubenswrapper[4996]: I0228 10:32:07.047670 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a609084e-320b-42fe-85b0-40cc965ed629" path="/var/lib/kubelet/pods/a609084e-320b-42fe-85b0-40cc965ed629/volumes" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.461872 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8m56h"] Feb 28 10:32:19 crc kubenswrapper[4996]: E0228 10:32:19.463377 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9227a54e-0f12-4a36-8321-db0b176c6a4c" containerName="oc" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.463405 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9227a54e-0f12-4a36-8321-db0b176c6a4c" containerName="oc" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.463913 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9227a54e-0f12-4a36-8321-db0b176c6a4c" containerName="oc" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.466693 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.500835 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8m56h"] Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.639973 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-catalog-content\") pod \"redhat-marketplace-8m56h\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.640879 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-utilities\") pod \"redhat-marketplace-8m56h\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.640937 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljwv4\" (UniqueName: \"kubernetes.io/projected/580e7924-9a5a-4047-b801-fe4ba84a1523-kube-api-access-ljwv4\") pod \"redhat-marketplace-8m56h\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.742855 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-catalog-content\") pod \"redhat-marketplace-8m56h\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.743055 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-utilities\") pod \"redhat-marketplace-8m56h\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.743099 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljwv4\" (UniqueName: \"kubernetes.io/projected/580e7924-9a5a-4047-b801-fe4ba84a1523-kube-api-access-ljwv4\") pod \"redhat-marketplace-8m56h\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.743764 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-catalog-content\") pod \"redhat-marketplace-8m56h\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.744246 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-utilities\") pod \"redhat-marketplace-8m56h\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.766100 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljwv4\" (UniqueName: \"kubernetes.io/projected/580e7924-9a5a-4047-b801-fe4ba84a1523-kube-api-access-ljwv4\") pod \"redhat-marketplace-8m56h\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:19 crc kubenswrapper[4996]: I0228 10:32:19.796404 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.050366 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dvvlv"] Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.053500 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.091320 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvvlv"] Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.252443 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-utilities\") pod \"redhat-operators-dvvlv\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.252537 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57dvc\" (UniqueName: \"kubernetes.io/projected/c44eff3f-2c33-4798-abd6-2bee059133fd-kube-api-access-57dvc\") pod \"redhat-operators-dvvlv\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.252625 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-catalog-content\") pod \"redhat-operators-dvvlv\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.326641 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-8m56h"] Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.353582 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-catalog-content\") pod \"redhat-operators-dvvlv\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.353656 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-utilities\") pod \"redhat-operators-dvvlv\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.353718 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57dvc\" (UniqueName: \"kubernetes.io/projected/c44eff3f-2c33-4798-abd6-2bee059133fd-kube-api-access-57dvc\") pod \"redhat-operators-dvvlv\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.354520 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-catalog-content\") pod \"redhat-operators-dvvlv\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.354736 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-utilities\") pod \"redhat-operators-dvvlv\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 
crc kubenswrapper[4996]: I0228 10:32:20.371182 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57dvc\" (UniqueName: \"kubernetes.io/projected/c44eff3f-2c33-4798-abd6-2bee059133fd-kube-api-access-57dvc\") pod \"redhat-operators-dvvlv\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.389107 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:20 crc kubenswrapper[4996]: I0228 10:32:20.915799 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvvlv"] Feb 28 10:32:21 crc kubenswrapper[4996]: I0228 10:32:21.107988 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvvlv" event={"ID":"c44eff3f-2c33-4798-abd6-2bee059133fd","Type":"ContainerStarted","Data":"52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df"} Feb 28 10:32:21 crc kubenswrapper[4996]: I0228 10:32:21.108198 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvvlv" event={"ID":"c44eff3f-2c33-4798-abd6-2bee059133fd","Type":"ContainerStarted","Data":"12e63c65d3b2920706ce958ca5152d563f2620e95a5de6168eafdb52c7f1ba44"} Feb 28 10:32:21 crc kubenswrapper[4996]: I0228 10:32:21.110803 4996 generic.go:334] "Generic (PLEG): container finished" podID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerID="fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b" exitCode=0 Feb 28 10:32:21 crc kubenswrapper[4996]: I0228 10:32:21.110864 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8m56h" event={"ID":"580e7924-9a5a-4047-b801-fe4ba84a1523","Type":"ContainerDied","Data":"fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b"} Feb 28 10:32:21 crc kubenswrapper[4996]: 
I0228 10:32:21.110897 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8m56h" event={"ID":"580e7924-9a5a-4047-b801-fe4ba84a1523","Type":"ContainerStarted","Data":"1b540dc05215658eaa79f9cf57213efffd595d8cb886e7cd985b76d007abddb3"} Feb 28 10:32:22 crc kubenswrapper[4996]: I0228 10:32:22.120963 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8m56h" event={"ID":"580e7924-9a5a-4047-b801-fe4ba84a1523","Type":"ContainerStarted","Data":"62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90"} Feb 28 10:32:22 crc kubenswrapper[4996]: I0228 10:32:22.122446 4996 generic.go:334] "Generic (PLEG): container finished" podID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerID="52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df" exitCode=0 Feb 28 10:32:22 crc kubenswrapper[4996]: I0228 10:32:22.122498 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvvlv" event={"ID":"c44eff3f-2c33-4798-abd6-2bee059133fd","Type":"ContainerDied","Data":"52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df"} Feb 28 10:32:23 crc kubenswrapper[4996]: I0228 10:32:23.132402 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvvlv" event={"ID":"c44eff3f-2c33-4798-abd6-2bee059133fd","Type":"ContainerStarted","Data":"99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f"} Feb 28 10:32:23 crc kubenswrapper[4996]: I0228 10:32:23.135570 4996 generic.go:334] "Generic (PLEG): container finished" podID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerID="62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90" exitCode=0 Feb 28 10:32:23 crc kubenswrapper[4996]: I0228 10:32:23.135607 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8m56h" 
event={"ID":"580e7924-9a5a-4047-b801-fe4ba84a1523","Type":"ContainerDied","Data":"62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90"} Feb 28 10:32:25 crc kubenswrapper[4996]: I0228 10:32:25.180230 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8m56h" event={"ID":"580e7924-9a5a-4047-b801-fe4ba84a1523","Type":"ContainerStarted","Data":"f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71"} Feb 28 10:32:25 crc kubenswrapper[4996]: I0228 10:32:25.212469 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8m56h" podStartSLOduration=3.507207423 podStartE2EDuration="6.212442841s" podCreationTimestamp="2026-02-28 10:32:19 +0000 UTC" firstStartedPulling="2026-02-28 10:32:21.114714697 +0000 UTC m=+5504.805517518" lastFinishedPulling="2026-02-28 10:32:23.819950115 +0000 UTC m=+5507.510752936" observedRunningTime="2026-02-28 10:32:25.20263107 +0000 UTC m=+5508.893433901" watchObservedRunningTime="2026-02-28 10:32:25.212442841 +0000 UTC m=+5508.903245672" Feb 28 10:32:29 crc kubenswrapper[4996]: I0228 10:32:29.218710 4996 generic.go:334] "Generic (PLEG): container finished" podID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerID="99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f" exitCode=0 Feb 28 10:32:29 crc kubenswrapper[4996]: I0228 10:32:29.218794 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvvlv" event={"ID":"c44eff3f-2c33-4798-abd6-2bee059133fd","Type":"ContainerDied","Data":"99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f"} Feb 28 10:32:29 crc kubenswrapper[4996]: I0228 10:32:29.797058 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:29 crc kubenswrapper[4996]: I0228 10:32:29.797378 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:29 crc kubenswrapper[4996]: I0228 10:32:29.853845 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:30 crc kubenswrapper[4996]: I0228 10:32:30.231320 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvvlv" event={"ID":"c44eff3f-2c33-4798-abd6-2bee059133fd","Type":"ContainerStarted","Data":"88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149"} Feb 28 10:32:30 crc kubenswrapper[4996]: I0228 10:32:30.262858 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dvvlv" podStartSLOduration=2.452756002 podStartE2EDuration="10.262836226s" podCreationTimestamp="2026-02-28 10:32:20 +0000 UTC" firstStartedPulling="2026-02-28 10:32:22.124380929 +0000 UTC m=+5505.815183740" lastFinishedPulling="2026-02-28 10:32:29.934461153 +0000 UTC m=+5513.625263964" observedRunningTime="2026-02-28 10:32:30.248449903 +0000 UTC m=+5513.939252744" watchObservedRunningTime="2026-02-28 10:32:30.262836226 +0000 UTC m=+5513.953639047" Feb 28 10:32:30 crc kubenswrapper[4996]: I0228 10:32:30.284972 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:30 crc kubenswrapper[4996]: I0228 10:32:30.391135 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:30 crc kubenswrapper[4996]: I0228 10:32:30.391178 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:31 crc kubenswrapper[4996]: I0228 10:32:31.434069 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dvvlv" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" 
containerName="registry-server" probeResult="failure" output=< Feb 28 10:32:31 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:32:31 crc kubenswrapper[4996]: > Feb 28 10:32:31 crc kubenswrapper[4996]: I0228 10:32:31.454514 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8m56h"] Feb 28 10:32:32 crc kubenswrapper[4996]: I0228 10:32:32.249717 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8m56h" podUID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerName="registry-server" containerID="cri-o://f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71" gracePeriod=2 Feb 28 10:32:32 crc kubenswrapper[4996]: I0228 10:32:32.749030 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:32 crc kubenswrapper[4996]: I0228 10:32:32.936211 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljwv4\" (UniqueName: \"kubernetes.io/projected/580e7924-9a5a-4047-b801-fe4ba84a1523-kube-api-access-ljwv4\") pod \"580e7924-9a5a-4047-b801-fe4ba84a1523\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " Feb 28 10:32:32 crc kubenswrapper[4996]: I0228 10:32:32.936671 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-utilities\") pod \"580e7924-9a5a-4047-b801-fe4ba84a1523\" (UID: \"580e7924-9a5a-4047-b801-fe4ba84a1523\") " Feb 28 10:32:32 crc kubenswrapper[4996]: I0228 10:32:32.936946 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-catalog-content\") pod \"580e7924-9a5a-4047-b801-fe4ba84a1523\" (UID: 
\"580e7924-9a5a-4047-b801-fe4ba84a1523\") " Feb 28 10:32:32 crc kubenswrapper[4996]: I0228 10:32:32.938830 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-utilities" (OuterVolumeSpecName: "utilities") pod "580e7924-9a5a-4047-b801-fe4ba84a1523" (UID: "580e7924-9a5a-4047-b801-fe4ba84a1523"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:32:32 crc kubenswrapper[4996]: I0228 10:32:32.945987 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/580e7924-9a5a-4047-b801-fe4ba84a1523-kube-api-access-ljwv4" (OuterVolumeSpecName: "kube-api-access-ljwv4") pod "580e7924-9a5a-4047-b801-fe4ba84a1523" (UID: "580e7924-9a5a-4047-b801-fe4ba84a1523"). InnerVolumeSpecName "kube-api-access-ljwv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:32:32 crc kubenswrapper[4996]: I0228 10:32:32.980112 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "580e7924-9a5a-4047-b801-fe4ba84a1523" (UID: "580e7924-9a5a-4047-b801-fe4ba84a1523"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.039721 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.039766 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljwv4\" (UniqueName: \"kubernetes.io/projected/580e7924-9a5a-4047-b801-fe4ba84a1523-kube-api-access-ljwv4\") on node \"crc\" DevicePath \"\"" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.039780 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/580e7924-9a5a-4047-b801-fe4ba84a1523-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.264845 4996 generic.go:334] "Generic (PLEG): container finished" podID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerID="f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71" exitCode=0 Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.264885 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8m56h" event={"ID":"580e7924-9a5a-4047-b801-fe4ba84a1523","Type":"ContainerDied","Data":"f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71"} Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.264920 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8m56h" event={"ID":"580e7924-9a5a-4047-b801-fe4ba84a1523","Type":"ContainerDied","Data":"1b540dc05215658eaa79f9cf57213efffd595d8cb886e7cd985b76d007abddb3"} Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.264947 4996 scope.go:117] "RemoveContainer" containerID="f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 
10:32:33.264974 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8m56h" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.296758 4996 scope.go:117] "RemoveContainer" containerID="62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.297056 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8m56h"] Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.307641 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8m56h"] Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.338740 4996 scope.go:117] "RemoveContainer" containerID="fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.369697 4996 scope.go:117] "RemoveContainer" containerID="f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71" Feb 28 10:32:33 crc kubenswrapper[4996]: E0228 10:32:33.370142 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71\": container with ID starting with f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71 not found: ID does not exist" containerID="f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.370190 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71"} err="failed to get container status \"f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71\": rpc error: code = NotFound desc = could not find container \"f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71\": container with ID starting with 
f9f8a50e5e717ab4398c6185da834d37fff7e00dacfaf7789164f266dd208a71 not found: ID does not exist" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.370221 4996 scope.go:117] "RemoveContainer" containerID="62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90" Feb 28 10:32:33 crc kubenswrapper[4996]: E0228 10:32:33.370563 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90\": container with ID starting with 62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90 not found: ID does not exist" containerID="62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.370607 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90"} err="failed to get container status \"62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90\": rpc error: code = NotFound desc = could not find container \"62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90\": container with ID starting with 62a9235024081ebb45b1fd1ababc39ac9c64329ad0fdd5645f66a9ea39ec3e90 not found: ID does not exist" Feb 28 10:32:33 crc kubenswrapper[4996]: I0228 10:32:33.370638 4996 scope.go:117] "RemoveContainer" containerID="fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b" Feb 28 10:32:33 crc kubenswrapper[4996]: E0228 10:32:33.370866 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b\": container with ID starting with fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b not found: ID does not exist" containerID="fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b" Feb 28 10:32:33 crc 
kubenswrapper[4996]: I0228 10:32:33.370893 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b"} err="failed to get container status \"fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b\": rpc error: code = NotFound desc = could not find container \"fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b\": container with ID starting with fbd40284322122ef381569272c2b4cc467cdbff34d4f7100a80eae69bfce920b not found: ID does not exist" Feb 28 10:32:35 crc kubenswrapper[4996]: I0228 10:32:35.048555 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="580e7924-9a5a-4047-b801-fe4ba84a1523" path="/var/lib/kubelet/pods/580e7924-9a5a-4047-b801-fe4ba84a1523/volumes" Feb 28 10:32:40 crc kubenswrapper[4996]: I0228 10:32:40.447872 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:40 crc kubenswrapper[4996]: I0228 10:32:40.505802 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:40 crc kubenswrapper[4996]: I0228 10:32:40.701911 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dvvlv"] Feb 28 10:32:42 crc kubenswrapper[4996]: I0228 10:32:42.353845 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dvvlv" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerName="registry-server" containerID="cri-o://88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149" gracePeriod=2 Feb 28 10:32:42 crc kubenswrapper[4996]: I0228 10:32:42.879652 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.062803 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-utilities\") pod \"c44eff3f-2c33-4798-abd6-2bee059133fd\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.062895 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57dvc\" (UniqueName: \"kubernetes.io/projected/c44eff3f-2c33-4798-abd6-2bee059133fd-kube-api-access-57dvc\") pod \"c44eff3f-2c33-4798-abd6-2bee059133fd\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.062929 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-catalog-content\") pod \"c44eff3f-2c33-4798-abd6-2bee059133fd\" (UID: \"c44eff3f-2c33-4798-abd6-2bee059133fd\") " Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.063901 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-utilities" (OuterVolumeSpecName: "utilities") pod "c44eff3f-2c33-4798-abd6-2bee059133fd" (UID: "c44eff3f-2c33-4798-abd6-2bee059133fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.068827 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44eff3f-2c33-4798-abd6-2bee059133fd-kube-api-access-57dvc" (OuterVolumeSpecName: "kube-api-access-57dvc") pod "c44eff3f-2c33-4798-abd6-2bee059133fd" (UID: "c44eff3f-2c33-4798-abd6-2bee059133fd"). InnerVolumeSpecName "kube-api-access-57dvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.165464 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.165506 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57dvc\" (UniqueName: \"kubernetes.io/projected/c44eff3f-2c33-4798-abd6-2bee059133fd-kube-api-access-57dvc\") on node \"crc\" DevicePath \"\"" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.194105 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c44eff3f-2c33-4798-abd6-2bee059133fd" (UID: "c44eff3f-2c33-4798-abd6-2bee059133fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.267705 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44eff3f-2c33-4798-abd6-2bee059133fd-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.375245 4996 generic.go:334] "Generic (PLEG): container finished" podID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerID="88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149" exitCode=0 Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.375317 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvvlv" event={"ID":"c44eff3f-2c33-4798-abd6-2bee059133fd","Type":"ContainerDied","Data":"88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149"} Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.375364 4996 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-dvvlv" event={"ID":"c44eff3f-2c33-4798-abd6-2bee059133fd","Type":"ContainerDied","Data":"12e63c65d3b2920706ce958ca5152d563f2620e95a5de6168eafdb52c7f1ba44"} Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.375407 4996 scope.go:117] "RemoveContainer" containerID="88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.375655 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvvlv" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.404994 4996 scope.go:117] "RemoveContainer" containerID="99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.424102 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dvvlv"] Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.433589 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dvvlv"] Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.435926 4996 scope.go:117] "RemoveContainer" containerID="52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.481376 4996 scope.go:117] "RemoveContainer" containerID="88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149" Feb 28 10:32:43 crc kubenswrapper[4996]: E0228 10:32:43.482081 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149\": container with ID starting with 88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149 not found: ID does not exist" containerID="88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.482166 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149"} err="failed to get container status \"88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149\": rpc error: code = NotFound desc = could not find container \"88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149\": container with ID starting with 88af8e3f2c8a958bd96e8e922bcada95b82e8e3f2de68818a0ebf1a8f0f33149 not found: ID does not exist" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.482193 4996 scope.go:117] "RemoveContainer" containerID="99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f" Feb 28 10:32:43 crc kubenswrapper[4996]: E0228 10:32:43.482959 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f\": container with ID starting with 99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f not found: ID does not exist" containerID="99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.482987 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f"} err="failed to get container status \"99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f\": rpc error: code = NotFound desc = could not find container \"99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f\": container with ID starting with 99c12c8ae4492cc36b8b2527daeb44dfb8422bff7759209a2a0d08ef29f7d21f not found: ID does not exist" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.483059 4996 scope.go:117] "RemoveContainer" containerID="52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df" Feb 28 10:32:43 crc kubenswrapper[4996]: E0228 
10:32:43.483376 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df\": container with ID starting with 52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df not found: ID does not exist" containerID="52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df" Feb 28 10:32:43 crc kubenswrapper[4996]: I0228 10:32:43.483408 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df"} err="failed to get container status \"52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df\": rpc error: code = NotFound desc = could not find container \"52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df\": container with ID starting with 52d426b67eff799a995378c4ba02c8a1ba45fc88b477ba6d23b6f42b6bc751df not found: ID does not exist" Feb 28 10:32:45 crc kubenswrapper[4996]: I0228 10:32:45.047325 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" path="/var/lib/kubelet/pods/c44eff3f-2c33-4798-abd6-2bee059133fd/volumes" Feb 28 10:32:48 crc kubenswrapper[4996]: I0228 10:32:48.976562 4996 scope.go:117] "RemoveContainer" containerID="a5d7f84be8df1a89fb7aebe4312d91094547a55dcdefb2d4406dfe1a0c3da4ff" Feb 28 10:33:12 crc kubenswrapper[4996]: I0228 10:33:12.249388 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:33:12 crc kubenswrapper[4996]: I0228 10:33:12.250201 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:33:42 crc kubenswrapper[4996]: I0228 10:33:42.248285 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:33:42 crc kubenswrapper[4996]: I0228 10:33:42.248817 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.140949 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537914-6xz47"] Feb 28 10:34:00 crc kubenswrapper[4996]: E0228 10:34:00.141798 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerName="extract-utilities" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.141810 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerName="extract-utilities" Feb 28 10:34:00 crc kubenswrapper[4996]: E0228 10:34:00.141828 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerName="registry-server" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.141835 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerName="registry-server" Feb 28 10:34:00 crc kubenswrapper[4996]: E0228 10:34:00.141848 4996 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerName="extract-content" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.141854 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerName="extract-content" Feb 28 10:34:00 crc kubenswrapper[4996]: E0228 10:34:00.141865 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerName="registry-server" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.141872 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerName="registry-server" Feb 28 10:34:00 crc kubenswrapper[4996]: E0228 10:34:00.141881 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerName="extract-utilities" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.141887 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerName="extract-utilities" Feb 28 10:34:00 crc kubenswrapper[4996]: E0228 10:34:00.141899 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerName="extract-content" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.141905 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerName="extract-content" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.142102 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="580e7924-9a5a-4047-b801-fe4ba84a1523" containerName="registry-server" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.142114 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44eff3f-2c33-4798-abd6-2bee059133fd" containerName="registry-server" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.142751 4996 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537914-6xz47" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.144777 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.144857 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.150791 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.161220 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537914-6xz47"] Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.289200 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9tz\" (UniqueName: \"kubernetes.io/projected/dc3e0236-765e-45e2-af5c-336572497058-kube-api-access-4f9tz\") pod \"auto-csr-approver-29537914-6xz47\" (UID: \"dc3e0236-765e-45e2-af5c-336572497058\") " pod="openshift-infra/auto-csr-approver-29537914-6xz47" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.391182 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f9tz\" (UniqueName: \"kubernetes.io/projected/dc3e0236-765e-45e2-af5c-336572497058-kube-api-access-4f9tz\") pod \"auto-csr-approver-29537914-6xz47\" (UID: \"dc3e0236-765e-45e2-af5c-336572497058\") " pod="openshift-infra/auto-csr-approver-29537914-6xz47" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.413176 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f9tz\" (UniqueName: \"kubernetes.io/projected/dc3e0236-765e-45e2-af5c-336572497058-kube-api-access-4f9tz\") pod \"auto-csr-approver-29537914-6xz47\" (UID: 
\"dc3e0236-765e-45e2-af5c-336572497058\") " pod="openshift-infra/auto-csr-approver-29537914-6xz47" Feb 28 10:34:00 crc kubenswrapper[4996]: I0228 10:34:00.629673 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537914-6xz47" Feb 28 10:34:01 crc kubenswrapper[4996]: I0228 10:34:01.118228 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537914-6xz47"] Feb 28 10:34:02 crc kubenswrapper[4996]: I0228 10:34:02.133653 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537914-6xz47" event={"ID":"dc3e0236-765e-45e2-af5c-336572497058","Type":"ContainerStarted","Data":"0f49ed9deeadfe2237d70400b313edb7a5c010099ac24439758843270940d98a"} Feb 28 10:34:03 crc kubenswrapper[4996]: I0228 10:34:03.143751 4996 generic.go:334] "Generic (PLEG): container finished" podID="dc3e0236-765e-45e2-af5c-336572497058" containerID="edcca6fb42d2b3d7695db9217517df1cd03837bd70084e3f104368b5ab4b59cb" exitCode=0 Feb 28 10:34:03 crc kubenswrapper[4996]: I0228 10:34:03.143920 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537914-6xz47" event={"ID":"dc3e0236-765e-45e2-af5c-336572497058","Type":"ContainerDied","Data":"edcca6fb42d2b3d7695db9217517df1cd03837bd70084e3f104368b5ab4b59cb"} Feb 28 10:34:04 crc kubenswrapper[4996]: I0228 10:34:04.572555 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537914-6xz47" Feb 28 10:34:04 crc kubenswrapper[4996]: I0228 10:34:04.683968 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f9tz\" (UniqueName: \"kubernetes.io/projected/dc3e0236-765e-45e2-af5c-336572497058-kube-api-access-4f9tz\") pod \"dc3e0236-765e-45e2-af5c-336572497058\" (UID: \"dc3e0236-765e-45e2-af5c-336572497058\") " Feb 28 10:34:04 crc kubenswrapper[4996]: I0228 10:34:04.691999 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3e0236-765e-45e2-af5c-336572497058-kube-api-access-4f9tz" (OuterVolumeSpecName: "kube-api-access-4f9tz") pod "dc3e0236-765e-45e2-af5c-336572497058" (UID: "dc3e0236-765e-45e2-af5c-336572497058"). InnerVolumeSpecName "kube-api-access-4f9tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:34:04 crc kubenswrapper[4996]: I0228 10:34:04.787108 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f9tz\" (UniqueName: \"kubernetes.io/projected/dc3e0236-765e-45e2-af5c-336572497058-kube-api-access-4f9tz\") on node \"crc\" DevicePath \"\"" Feb 28 10:34:05 crc kubenswrapper[4996]: I0228 10:34:05.161645 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537914-6xz47" event={"ID":"dc3e0236-765e-45e2-af5c-336572497058","Type":"ContainerDied","Data":"0f49ed9deeadfe2237d70400b313edb7a5c010099ac24439758843270940d98a"} Feb 28 10:34:05 crc kubenswrapper[4996]: I0228 10:34:05.162069 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f49ed9deeadfe2237d70400b313edb7a5c010099ac24439758843270940d98a" Feb 28 10:34:05 crc kubenswrapper[4996]: I0228 10:34:05.161690 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537914-6xz47" Feb 28 10:34:05 crc kubenswrapper[4996]: I0228 10:34:05.649603 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537908-pkgtv"] Feb 28 10:34:05 crc kubenswrapper[4996]: I0228 10:34:05.657187 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537908-pkgtv"] Feb 28 10:34:07 crc kubenswrapper[4996]: I0228 10:34:07.045856 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a35f961-f608-48bd-a926-041c44788c9f" path="/var/lib/kubelet/pods/9a35f961-f608-48bd-a926-041c44788c9f/volumes" Feb 28 10:34:12 crc kubenswrapper[4996]: I0228 10:34:12.248903 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:34:12 crc kubenswrapper[4996]: I0228 10:34:12.249514 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:34:12 crc kubenswrapper[4996]: I0228 10:34:12.249575 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:34:12 crc kubenswrapper[4996]: I0228 10:34:12.250520 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1f772f1fbd88221bf8998eab224ce8c51bde8ffca295bec9fa759cbc6c1a8b5"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:34:12 crc kubenswrapper[4996]: I0228 10:34:12.250597 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://b1f772f1fbd88221bf8998eab224ce8c51bde8ffca295bec9fa759cbc6c1a8b5" gracePeriod=600 Feb 28 10:34:13 crc kubenswrapper[4996]: I0228 10:34:13.237435 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="b1f772f1fbd88221bf8998eab224ce8c51bde8ffca295bec9fa759cbc6c1a8b5" exitCode=0 Feb 28 10:34:13 crc kubenswrapper[4996]: I0228 10:34:13.237530 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"b1f772f1fbd88221bf8998eab224ce8c51bde8ffca295bec9fa759cbc6c1a8b5"} Feb 28 10:34:13 crc kubenswrapper[4996]: I0228 10:34:13.237951 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b"} Feb 28 10:34:13 crc kubenswrapper[4996]: I0228 10:34:13.237979 4996 scope.go:117] "RemoveContainer" containerID="81be8abef3091abe8de2d13f4af4c67d38e2f134bcb74bb24f3516c943a441b3" Feb 28 10:34:49 crc kubenswrapper[4996]: I0228 10:34:49.146586 4996 scope.go:117] "RemoveContainer" containerID="a3ee880a636a4c131e07116a4d7cd929a9368c174ba8118e1c98f3b902db24e0" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.142818 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537916-pv5q7"] Feb 28 10:36:00 crc kubenswrapper[4996]: E0228 
10:36:00.144032 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3e0236-765e-45e2-af5c-336572497058" containerName="oc" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.144052 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3e0236-765e-45e2-af5c-336572497058" containerName="oc" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.144313 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3e0236-765e-45e2-af5c-336572497058" containerName="oc" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.145149 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537916-pv5q7" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.147630 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.147672 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.147749 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.160835 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537916-pv5q7"] Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.297720 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wrq\" (UniqueName: \"kubernetes.io/projected/a68823da-d036-46a7-a656-fc65910c7f55-kube-api-access-p4wrq\") pod \"auto-csr-approver-29537916-pv5q7\" (UID: \"a68823da-d036-46a7-a656-fc65910c7f55\") " pod="openshift-infra/auto-csr-approver-29537916-pv5q7" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.399490 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p4wrq\" (UniqueName: \"kubernetes.io/projected/a68823da-d036-46a7-a656-fc65910c7f55-kube-api-access-p4wrq\") pod \"auto-csr-approver-29537916-pv5q7\" (UID: \"a68823da-d036-46a7-a656-fc65910c7f55\") " pod="openshift-infra/auto-csr-approver-29537916-pv5q7" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.421982 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4wrq\" (UniqueName: \"kubernetes.io/projected/a68823da-d036-46a7-a656-fc65910c7f55-kube-api-access-p4wrq\") pod \"auto-csr-approver-29537916-pv5q7\" (UID: \"a68823da-d036-46a7-a656-fc65910c7f55\") " pod="openshift-infra/auto-csr-approver-29537916-pv5q7" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.465319 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537916-pv5q7" Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.937462 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537916-pv5q7"] Feb 28 10:36:00 crc kubenswrapper[4996]: I0228 10:36:00.946024 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:36:01 crc kubenswrapper[4996]: I0228 10:36:01.244275 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537916-pv5q7" event={"ID":"a68823da-d036-46a7-a656-fc65910c7f55","Type":"ContainerStarted","Data":"eaab923143c2543832d80fd2cbe377467bf3feb16e6e3436013af9073ddc0756"} Feb 28 10:36:02 crc kubenswrapper[4996]: I0228 10:36:02.254893 4996 generic.go:334] "Generic (PLEG): container finished" podID="a68823da-d036-46a7-a656-fc65910c7f55" containerID="9035483b83b65d2252429c168f08c1d1be1a4d67a5d97541671809856d47ab48" exitCode=0 Feb 28 10:36:02 crc kubenswrapper[4996]: I0228 10:36:02.255097 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537916-pv5q7" 
event={"ID":"a68823da-d036-46a7-a656-fc65910c7f55","Type":"ContainerDied","Data":"9035483b83b65d2252429c168f08c1d1be1a4d67a5d97541671809856d47ab48"} Feb 28 10:36:03 crc kubenswrapper[4996]: I0228 10:36:03.628534 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537916-pv5q7" Feb 28 10:36:03 crc kubenswrapper[4996]: I0228 10:36:03.664525 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4wrq\" (UniqueName: \"kubernetes.io/projected/a68823da-d036-46a7-a656-fc65910c7f55-kube-api-access-p4wrq\") pod \"a68823da-d036-46a7-a656-fc65910c7f55\" (UID: \"a68823da-d036-46a7-a656-fc65910c7f55\") " Feb 28 10:36:03 crc kubenswrapper[4996]: I0228 10:36:03.672804 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a68823da-d036-46a7-a656-fc65910c7f55-kube-api-access-p4wrq" (OuterVolumeSpecName: "kube-api-access-p4wrq") pod "a68823da-d036-46a7-a656-fc65910c7f55" (UID: "a68823da-d036-46a7-a656-fc65910c7f55"). InnerVolumeSpecName "kube-api-access-p4wrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:36:03 crc kubenswrapper[4996]: I0228 10:36:03.766417 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4wrq\" (UniqueName: \"kubernetes.io/projected/a68823da-d036-46a7-a656-fc65910c7f55-kube-api-access-p4wrq\") on node \"crc\" DevicePath \"\"" Feb 28 10:36:04 crc kubenswrapper[4996]: I0228 10:36:04.280581 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537916-pv5q7" event={"ID":"a68823da-d036-46a7-a656-fc65910c7f55","Type":"ContainerDied","Data":"eaab923143c2543832d80fd2cbe377467bf3feb16e6e3436013af9073ddc0756"} Feb 28 10:36:04 crc kubenswrapper[4996]: I0228 10:36:04.281324 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaab923143c2543832d80fd2cbe377467bf3feb16e6e3436013af9073ddc0756" Feb 28 10:36:04 crc kubenswrapper[4996]: I0228 10:36:04.280636 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537916-pv5q7" Feb 28 10:36:04 crc kubenswrapper[4996]: I0228 10:36:04.698364 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537910-r9bkl"] Feb 28 10:36:04 crc kubenswrapper[4996]: I0228 10:36:04.708526 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537910-r9bkl"] Feb 28 10:36:05 crc kubenswrapper[4996]: I0228 10:36:05.054569 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb6d69a-c218-4539-813b-a291b7bbb243" path="/var/lib/kubelet/pods/aeb6d69a-c218-4539-813b-a291b7bbb243/volumes" Feb 28 10:36:12 crc kubenswrapper[4996]: I0228 10:36:12.249374 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 10:36:12 crc kubenswrapper[4996]: I0228 10:36:12.250479 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:36:42 crc kubenswrapper[4996]: I0228 10:36:42.249311 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:36:42 crc kubenswrapper[4996]: I0228 10:36:42.249928 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:36:49 crc kubenswrapper[4996]: I0228 10:36:49.260292 4996 scope.go:117] "RemoveContainer" containerID="b31dfef930a477bc4a467f196be272d643d26c84f932caef6c8f06ac2a8cb78e" Feb 28 10:37:12 crc kubenswrapper[4996]: I0228 10:37:12.250019 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:37:12 crc kubenswrapper[4996]: I0228 10:37:12.250926 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:37:12 crc kubenswrapper[4996]: I0228 10:37:12.250989 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:37:12 crc kubenswrapper[4996]: I0228 10:37:12.253588 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:37:12 crc kubenswrapper[4996]: I0228 10:37:12.253671 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" gracePeriod=600 Feb 28 10:37:12 crc kubenswrapper[4996]: E0228 10:37:12.382260 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:37:13 crc kubenswrapper[4996]: I0228 10:37:13.116441 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" exitCode=0 Feb 28 10:37:13 crc kubenswrapper[4996]: I0228 10:37:13.116514 4996 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b"} Feb 28 10:37:13 crc kubenswrapper[4996]: I0228 10:37:13.116573 4996 scope.go:117] "RemoveContainer" containerID="b1f772f1fbd88221bf8998eab224ce8c51bde8ffca295bec9fa759cbc6c1a8b5" Feb 28 10:37:13 crc kubenswrapper[4996]: I0228 10:37:13.117542 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:37:13 crc kubenswrapper[4996]: E0228 10:37:13.118125 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:37:27 crc kubenswrapper[4996]: I0228 10:37:27.038448 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:37:27 crc kubenswrapper[4996]: E0228 10:37:27.039173 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:37:40 crc kubenswrapper[4996]: I0228 10:37:40.033761 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:37:40 crc kubenswrapper[4996]: E0228 
10:37:40.034591 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:37:52 crc kubenswrapper[4996]: I0228 10:37:52.034647 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:37:52 crc kubenswrapper[4996]: E0228 10:37:52.035568 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.145096 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537918-6xgl5"] Feb 28 10:38:00 crc kubenswrapper[4996]: E0228 10:38:00.145928 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a68823da-d036-46a7-a656-fc65910c7f55" containerName="oc" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.145940 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a68823da-d036-46a7-a656-fc65910c7f55" containerName="oc" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.146201 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a68823da-d036-46a7-a656-fc65910c7f55" containerName="oc" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.146808 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537918-6xgl5" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.149240 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.149483 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.150058 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.157472 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537918-6xgl5"] Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.292414 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mbkw\" (UniqueName: \"kubernetes.io/projected/32cee31a-8ec4-4892-9beb-305b1b5a3a6a-kube-api-access-5mbkw\") pod \"auto-csr-approver-29537918-6xgl5\" (UID: \"32cee31a-8ec4-4892-9beb-305b1b5a3a6a\") " pod="openshift-infra/auto-csr-approver-29537918-6xgl5" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.394668 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mbkw\" (UniqueName: \"kubernetes.io/projected/32cee31a-8ec4-4892-9beb-305b1b5a3a6a-kube-api-access-5mbkw\") pod \"auto-csr-approver-29537918-6xgl5\" (UID: \"32cee31a-8ec4-4892-9beb-305b1b5a3a6a\") " pod="openshift-infra/auto-csr-approver-29537918-6xgl5" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.420267 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mbkw\" (UniqueName: \"kubernetes.io/projected/32cee31a-8ec4-4892-9beb-305b1b5a3a6a-kube-api-access-5mbkw\") pod \"auto-csr-approver-29537918-6xgl5\" (UID: \"32cee31a-8ec4-4892-9beb-305b1b5a3a6a\") " 
pod="openshift-infra/auto-csr-approver-29537918-6xgl5" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.466396 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537918-6xgl5" Feb 28 10:38:00 crc kubenswrapper[4996]: I0228 10:38:00.907286 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537918-6xgl5"] Feb 28 10:38:01 crc kubenswrapper[4996]: I0228 10:38:01.592291 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537918-6xgl5" event={"ID":"32cee31a-8ec4-4892-9beb-305b1b5a3a6a","Type":"ContainerStarted","Data":"3aa7cdbf3b07ab4411c77197bf3a2c02506560e17bc994b193945cdeeed509ee"} Feb 28 10:38:03 crc kubenswrapper[4996]: I0228 10:38:03.610301 4996 generic.go:334] "Generic (PLEG): container finished" podID="32cee31a-8ec4-4892-9beb-305b1b5a3a6a" containerID="2443b599887a0f9300a5a528611ad092e47c1fec47dbb05fba1d3f912e8162e2" exitCode=0 Feb 28 10:38:03 crc kubenswrapper[4996]: I0228 10:38:03.610536 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537918-6xgl5" event={"ID":"32cee31a-8ec4-4892-9beb-305b1b5a3a6a","Type":"ContainerDied","Data":"2443b599887a0f9300a5a528611ad092e47c1fec47dbb05fba1d3f912e8162e2"} Feb 28 10:38:05 crc kubenswrapper[4996]: I0228 10:38:05.006325 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537918-6xgl5" Feb 28 10:38:05 crc kubenswrapper[4996]: I0228 10:38:05.035923 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:38:05 crc kubenswrapper[4996]: E0228 10:38:05.038495 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:38:05 crc kubenswrapper[4996]: I0228 10:38:05.198944 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mbkw\" (UniqueName: \"kubernetes.io/projected/32cee31a-8ec4-4892-9beb-305b1b5a3a6a-kube-api-access-5mbkw\") pod \"32cee31a-8ec4-4892-9beb-305b1b5a3a6a\" (UID: \"32cee31a-8ec4-4892-9beb-305b1b5a3a6a\") " Feb 28 10:38:05 crc kubenswrapper[4996]: I0228 10:38:05.215293 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cee31a-8ec4-4892-9beb-305b1b5a3a6a-kube-api-access-5mbkw" (OuterVolumeSpecName: "kube-api-access-5mbkw") pod "32cee31a-8ec4-4892-9beb-305b1b5a3a6a" (UID: "32cee31a-8ec4-4892-9beb-305b1b5a3a6a"). InnerVolumeSpecName "kube-api-access-5mbkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:38:05 crc kubenswrapper[4996]: I0228 10:38:05.302248 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mbkw\" (UniqueName: \"kubernetes.io/projected/32cee31a-8ec4-4892-9beb-305b1b5a3a6a-kube-api-access-5mbkw\") on node \"crc\" DevicePath \"\"" Feb 28 10:38:05 crc kubenswrapper[4996]: I0228 10:38:05.628842 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537918-6xgl5" event={"ID":"32cee31a-8ec4-4892-9beb-305b1b5a3a6a","Type":"ContainerDied","Data":"3aa7cdbf3b07ab4411c77197bf3a2c02506560e17bc994b193945cdeeed509ee"} Feb 28 10:38:05 crc kubenswrapper[4996]: I0228 10:38:05.628881 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa7cdbf3b07ab4411c77197bf3a2c02506560e17bc994b193945cdeeed509ee" Feb 28 10:38:05 crc kubenswrapper[4996]: I0228 10:38:05.628937 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537918-6xgl5" Feb 28 10:38:06 crc kubenswrapper[4996]: I0228 10:38:06.079782 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537912-cfn8s"] Feb 28 10:38:06 crc kubenswrapper[4996]: I0228 10:38:06.094871 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537912-cfn8s"] Feb 28 10:38:07 crc kubenswrapper[4996]: I0228 10:38:07.042753 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9227a54e-0f12-4a36-8321-db0b176c6a4c" path="/var/lib/kubelet/pods/9227a54e-0f12-4a36-8321-db0b176c6a4c/volumes" Feb 28 10:38:17 crc kubenswrapper[4996]: I0228 10:38:17.041543 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:38:17 crc kubenswrapper[4996]: E0228 10:38:17.042393 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:38:32 crc kubenswrapper[4996]: I0228 10:38:32.032946 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:38:32 crc kubenswrapper[4996]: E0228 10:38:32.033803 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.729231 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d49ss"] Feb 28 10:38:43 crc kubenswrapper[4996]: E0228 10:38:43.730771 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cee31a-8ec4-4892-9beb-305b1b5a3a6a" containerName="oc" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.730786 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cee31a-8ec4-4892-9beb-305b1b5a3a6a" containerName="oc" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.731018 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cee31a-8ec4-4892-9beb-305b1b5a3a6a" containerName="oc" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.732583 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.762625 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d49ss"] Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.885441 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-utilities\") pod \"certified-operators-d49ss\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.885809 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kf6q\" (UniqueName: \"kubernetes.io/projected/467bbd1c-a181-48ca-91c4-71b7f179d14f-kube-api-access-7kf6q\") pod \"certified-operators-d49ss\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.885980 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-catalog-content\") pod \"certified-operators-d49ss\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.987801 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kf6q\" (UniqueName: \"kubernetes.io/projected/467bbd1c-a181-48ca-91c4-71b7f179d14f-kube-api-access-7kf6q\") pod \"certified-operators-d49ss\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.987901 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-catalog-content\") pod \"certified-operators-d49ss\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.988034 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-utilities\") pod \"certified-operators-d49ss\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.988703 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-utilities\") pod \"certified-operators-d49ss\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:43 crc kubenswrapper[4996]: I0228 10:38:43.989287 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-catalog-content\") pod \"certified-operators-d49ss\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:44 crc kubenswrapper[4996]: I0228 10:38:44.009272 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kf6q\" (UniqueName: \"kubernetes.io/projected/467bbd1c-a181-48ca-91c4-71b7f179d14f-kube-api-access-7kf6q\") pod \"certified-operators-d49ss\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:44 crc kubenswrapper[4996]: I0228 10:38:44.075872 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:44 crc kubenswrapper[4996]: I0228 10:38:44.599631 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d49ss"] Feb 28 10:38:45 crc kubenswrapper[4996]: I0228 10:38:45.029989 4996 generic.go:334] "Generic (PLEG): container finished" podID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerID="1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51" exitCode=0 Feb 28 10:38:45 crc kubenswrapper[4996]: I0228 10:38:45.030157 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d49ss" event={"ID":"467bbd1c-a181-48ca-91c4-71b7f179d14f","Type":"ContainerDied","Data":"1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51"} Feb 28 10:38:45 crc kubenswrapper[4996]: I0228 10:38:45.030292 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d49ss" event={"ID":"467bbd1c-a181-48ca-91c4-71b7f179d14f","Type":"ContainerStarted","Data":"5c8065126094d8263981765d684596d420841955831ebf2d8c6b03dfd24b4969"} Feb 28 10:38:45 crc kubenswrapper[4996]: I0228 10:38:45.033220 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:38:45 crc kubenswrapper[4996]: E0228 10:38:45.033799 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:38:46 crc kubenswrapper[4996]: I0228 10:38:46.039650 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d49ss" 
event={"ID":"467bbd1c-a181-48ca-91c4-71b7f179d14f","Type":"ContainerStarted","Data":"32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902"} Feb 28 10:38:47 crc kubenswrapper[4996]: I0228 10:38:47.046924 4996 generic.go:334] "Generic (PLEG): container finished" podID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerID="32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902" exitCode=0 Feb 28 10:38:47 crc kubenswrapper[4996]: I0228 10:38:47.046965 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d49ss" event={"ID":"467bbd1c-a181-48ca-91c4-71b7f179d14f","Type":"ContainerDied","Data":"32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902"} Feb 28 10:38:48 crc kubenswrapper[4996]: I0228 10:38:48.058666 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d49ss" event={"ID":"467bbd1c-a181-48ca-91c4-71b7f179d14f","Type":"ContainerStarted","Data":"647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834"} Feb 28 10:38:48 crc kubenswrapper[4996]: I0228 10:38:48.077281 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d49ss" podStartSLOduration=2.499600699 podStartE2EDuration="5.077256792s" podCreationTimestamp="2026-02-28 10:38:43 +0000 UTC" firstStartedPulling="2026-02-28 10:38:45.031906961 +0000 UTC m=+5888.722709772" lastFinishedPulling="2026-02-28 10:38:47.609563054 +0000 UTC m=+5891.300365865" observedRunningTime="2026-02-28 10:38:48.07389555 +0000 UTC m=+5891.764698371" watchObservedRunningTime="2026-02-28 10:38:48.077256792 +0000 UTC m=+5891.768059603" Feb 28 10:38:49 crc kubenswrapper[4996]: I0228 10:38:49.363993 4996 scope.go:117] "RemoveContainer" containerID="60ee07732f85e51562754e2baaf271c16460df56f265fa4d88228b5476d349b8" Feb 28 10:38:54 crc kubenswrapper[4996]: I0228 10:38:54.076619 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:54 crc kubenswrapper[4996]: I0228 10:38:54.077371 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:54 crc kubenswrapper[4996]: I0228 10:38:54.122596 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:54 crc kubenswrapper[4996]: I0228 10:38:54.168494 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:54 crc kubenswrapper[4996]: I0228 10:38:54.360248 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d49ss"] Feb 28 10:38:56 crc kubenswrapper[4996]: I0228 10:38:56.121299 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d49ss" podUID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerName="registry-server" containerID="cri-o://647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834" gracePeriod=2 Feb 28 10:38:56 crc kubenswrapper[4996]: I0228 10:38:56.749570 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:56 crc kubenswrapper[4996]: I0228 10:38:56.828131 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-utilities\") pod \"467bbd1c-a181-48ca-91c4-71b7f179d14f\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " Feb 28 10:38:56 crc kubenswrapper[4996]: I0228 10:38:56.828316 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-catalog-content\") pod \"467bbd1c-a181-48ca-91c4-71b7f179d14f\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " Feb 28 10:38:56 crc kubenswrapper[4996]: I0228 10:38:56.828369 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kf6q\" (UniqueName: \"kubernetes.io/projected/467bbd1c-a181-48ca-91c4-71b7f179d14f-kube-api-access-7kf6q\") pod \"467bbd1c-a181-48ca-91c4-71b7f179d14f\" (UID: \"467bbd1c-a181-48ca-91c4-71b7f179d14f\") " Feb 28 10:38:56 crc kubenswrapper[4996]: I0228 10:38:56.828919 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-utilities" (OuterVolumeSpecName: "utilities") pod "467bbd1c-a181-48ca-91c4-71b7f179d14f" (UID: "467bbd1c-a181-48ca-91c4-71b7f179d14f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:38:56 crc kubenswrapper[4996]: I0228 10:38:56.834313 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/467bbd1c-a181-48ca-91c4-71b7f179d14f-kube-api-access-7kf6q" (OuterVolumeSpecName: "kube-api-access-7kf6q") pod "467bbd1c-a181-48ca-91c4-71b7f179d14f" (UID: "467bbd1c-a181-48ca-91c4-71b7f179d14f"). InnerVolumeSpecName "kube-api-access-7kf6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:38:56 crc kubenswrapper[4996]: I0228 10:38:56.930493 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:38:56 crc kubenswrapper[4996]: I0228 10:38:56.930526 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kf6q\" (UniqueName: \"kubernetes.io/projected/467bbd1c-a181-48ca-91c4-71b7f179d14f-kube-api-access-7kf6q\") on node \"crc\" DevicePath \"\"" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.117503 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "467bbd1c-a181-48ca-91c4-71b7f179d14f" (UID: "467bbd1c-a181-48ca-91c4-71b7f179d14f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.133425 4996 generic.go:334] "Generic (PLEG): container finished" podID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerID="647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834" exitCode=0 Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.133488 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d49ss" event={"ID":"467bbd1c-a181-48ca-91c4-71b7f179d14f","Type":"ContainerDied","Data":"647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834"} Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.133550 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d49ss" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.133568 4996 scope.go:117] "RemoveContainer" containerID="647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.133555 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d49ss" event={"ID":"467bbd1c-a181-48ca-91c4-71b7f179d14f","Type":"ContainerDied","Data":"5c8065126094d8263981765d684596d420841955831ebf2d8c6b03dfd24b4969"} Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.134420 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/467bbd1c-a181-48ca-91c4-71b7f179d14f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.161713 4996 scope.go:117] "RemoveContainer" containerID="32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.181936 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d49ss"] Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.192201 4996 scope.go:117] "RemoveContainer" containerID="1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.194933 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d49ss"] Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.243377 4996 scope.go:117] "RemoveContainer" containerID="647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834" Feb 28 10:38:57 crc kubenswrapper[4996]: E0228 10:38:57.243834 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834\": 
container with ID starting with 647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834 not found: ID does not exist" containerID="647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.243866 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834"} err="failed to get container status \"647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834\": rpc error: code = NotFound desc = could not find container \"647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834\": container with ID starting with 647aa7817fe9f079673ee3b7d9e9285f75ea3bc557ab1f9071e9b88b00163834 not found: ID does not exist" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.243885 4996 scope.go:117] "RemoveContainer" containerID="32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902" Feb 28 10:38:57 crc kubenswrapper[4996]: E0228 10:38:57.244260 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902\": container with ID starting with 32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902 not found: ID does not exist" containerID="32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.244316 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902"} err="failed to get container status \"32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902\": rpc error: code = NotFound desc = could not find container \"32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902\": container with ID starting with 
32cc8228ddc91a3e4db1168be2a0cc3b40b78c86b04fca5ede14b90c25a4f902 not found: ID does not exist" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.244353 4996 scope.go:117] "RemoveContainer" containerID="1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51" Feb 28 10:38:57 crc kubenswrapper[4996]: E0228 10:38:57.244719 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51\": container with ID starting with 1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51 not found: ID does not exist" containerID="1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51" Feb 28 10:38:57 crc kubenswrapper[4996]: I0228 10:38:57.244743 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51"} err="failed to get container status \"1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51\": rpc error: code = NotFound desc = could not find container \"1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51\": container with ID starting with 1820119f54f17fe68892cab50450a61fc6e80c36c726d8a2dcb93c01b348eb51 not found: ID does not exist" Feb 28 10:38:59 crc kubenswrapper[4996]: I0228 10:38:59.036028 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:38:59 crc kubenswrapper[4996]: E0228 10:38:59.036512 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:38:59 crc kubenswrapper[4996]: I0228 10:38:59.048304 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="467bbd1c-a181-48ca-91c4-71b7f179d14f" path="/var/lib/kubelet/pods/467bbd1c-a181-48ca-91c4-71b7f179d14f/volumes" Feb 28 10:39:12 crc kubenswrapper[4996]: I0228 10:39:12.033658 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:39:12 crc kubenswrapper[4996]: E0228 10:39:12.034815 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:39:26 crc kubenswrapper[4996]: I0228 10:39:26.034678 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:39:26 crc kubenswrapper[4996]: E0228 10:39:26.035461 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:39:39 crc kubenswrapper[4996]: I0228 10:39:39.033175 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:39:39 crc kubenswrapper[4996]: E0228 10:39:39.033759 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:39:50 crc kubenswrapper[4996]: I0228 10:39:50.033345 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:39:50 crc kubenswrapper[4996]: E0228 10:39:50.034062 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.148660 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537920-jmkj2"] Feb 28 10:40:00 crc kubenswrapper[4996]: E0228 10:40:00.149634 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerName="extract-content" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.149657 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerName="extract-content" Feb 28 10:40:00 crc kubenswrapper[4996]: E0228 10:40:00.149693 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerName="extract-utilities" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.149702 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerName="extract-utilities" Feb 28 10:40:00 crc 
kubenswrapper[4996]: E0228 10:40:00.149718 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerName="registry-server" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.149726 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerName="registry-server" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.149941 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="467bbd1c-a181-48ca-91c4-71b7f179d14f" containerName="registry-server" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.150801 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537920-jmkj2" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.152877 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.153948 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.153970 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.170760 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537920-jmkj2"] Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.208876 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfr2\" (UniqueName: \"kubernetes.io/projected/d689d7e6-a404-46b2-bf90-3910fd8f7055-kube-api-access-trfr2\") pod \"auto-csr-approver-29537920-jmkj2\" (UID: \"d689d7e6-a404-46b2-bf90-3910fd8f7055\") " pod="openshift-infra/auto-csr-approver-29537920-jmkj2" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.310286 4996 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trfr2\" (UniqueName: \"kubernetes.io/projected/d689d7e6-a404-46b2-bf90-3910fd8f7055-kube-api-access-trfr2\") pod \"auto-csr-approver-29537920-jmkj2\" (UID: \"d689d7e6-a404-46b2-bf90-3910fd8f7055\") " pod="openshift-infra/auto-csr-approver-29537920-jmkj2" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.329023 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trfr2\" (UniqueName: \"kubernetes.io/projected/d689d7e6-a404-46b2-bf90-3910fd8f7055-kube-api-access-trfr2\") pod \"auto-csr-approver-29537920-jmkj2\" (UID: \"d689d7e6-a404-46b2-bf90-3910fd8f7055\") " pod="openshift-infra/auto-csr-approver-29537920-jmkj2" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.470120 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537920-jmkj2" Feb 28 10:40:00 crc kubenswrapper[4996]: I0228 10:40:00.918558 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537920-jmkj2"] Feb 28 10:40:01 crc kubenswrapper[4996]: I0228 10:40:01.732492 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537920-jmkj2" event={"ID":"d689d7e6-a404-46b2-bf90-3910fd8f7055","Type":"ContainerStarted","Data":"0f99e896860547c4d1df2aeaa838c7fe997fd116a2589171dad2420f8e506234"} Feb 28 10:40:02 crc kubenswrapper[4996]: I0228 10:40:02.745212 4996 generic.go:334] "Generic (PLEG): container finished" podID="d689d7e6-a404-46b2-bf90-3910fd8f7055" containerID="e90ca9817e514c08a60e57a306e777249f4c3438a7967cf9455e4bb9edf17b64" exitCode=0 Feb 28 10:40:02 crc kubenswrapper[4996]: I0228 10:40:02.745351 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537920-jmkj2" 
event={"ID":"d689d7e6-a404-46b2-bf90-3910fd8f7055","Type":"ContainerDied","Data":"e90ca9817e514c08a60e57a306e777249f4c3438a7967cf9455e4bb9edf17b64"} Feb 28 10:40:04 crc kubenswrapper[4996]: I0228 10:40:04.313456 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537920-jmkj2" Feb 28 10:40:04 crc kubenswrapper[4996]: I0228 10:40:04.439848 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trfr2\" (UniqueName: \"kubernetes.io/projected/d689d7e6-a404-46b2-bf90-3910fd8f7055-kube-api-access-trfr2\") pod \"d689d7e6-a404-46b2-bf90-3910fd8f7055\" (UID: \"d689d7e6-a404-46b2-bf90-3910fd8f7055\") " Feb 28 10:40:04 crc kubenswrapper[4996]: I0228 10:40:04.445798 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d689d7e6-a404-46b2-bf90-3910fd8f7055-kube-api-access-trfr2" (OuterVolumeSpecName: "kube-api-access-trfr2") pod "d689d7e6-a404-46b2-bf90-3910fd8f7055" (UID: "d689d7e6-a404-46b2-bf90-3910fd8f7055"). InnerVolumeSpecName "kube-api-access-trfr2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:40:04 crc kubenswrapper[4996]: I0228 10:40:04.542200 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trfr2\" (UniqueName: \"kubernetes.io/projected/d689d7e6-a404-46b2-bf90-3910fd8f7055-kube-api-access-trfr2\") on node \"crc\" DevicePath \"\"" Feb 28 10:40:04 crc kubenswrapper[4996]: I0228 10:40:04.765294 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537920-jmkj2" event={"ID":"d689d7e6-a404-46b2-bf90-3910fd8f7055","Type":"ContainerDied","Data":"0f99e896860547c4d1df2aeaa838c7fe997fd116a2589171dad2420f8e506234"} Feb 28 10:40:04 crc kubenswrapper[4996]: I0228 10:40:04.765341 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f99e896860547c4d1df2aeaa838c7fe997fd116a2589171dad2420f8e506234" Feb 28 10:40:04 crc kubenswrapper[4996]: I0228 10:40:04.765584 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537920-jmkj2" Feb 28 10:40:05 crc kubenswrapper[4996]: I0228 10:40:05.033805 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:40:05 crc kubenswrapper[4996]: E0228 10:40:05.034466 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:40:05 crc kubenswrapper[4996]: I0228 10:40:05.400169 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537914-6xz47"] Feb 28 10:40:05 crc kubenswrapper[4996]: I0228 10:40:05.408381 4996 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537914-6xz47"] Feb 28 10:40:07 crc kubenswrapper[4996]: I0228 10:40:07.049297 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3e0236-765e-45e2-af5c-336572497058" path="/var/lib/kubelet/pods/dc3e0236-765e-45e2-af5c-336572497058/volumes" Feb 28 10:40:19 crc kubenswrapper[4996]: I0228 10:40:19.034158 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:40:19 crc kubenswrapper[4996]: E0228 10:40:19.034990 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:40:32 crc kubenswrapper[4996]: I0228 10:40:32.032810 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:40:32 crc kubenswrapper[4996]: E0228 10:40:32.033738 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:40:46 crc kubenswrapper[4996]: I0228 10:40:46.033866 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:40:46 crc kubenswrapper[4996]: E0228 10:40:46.034558 4996 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:40:49 crc kubenswrapper[4996]: I0228 10:40:49.475315 4996 scope.go:117] "RemoveContainer" containerID="edcca6fb42d2b3d7695db9217517df1cd03837bd70084e3f104368b5ab4b59cb" Feb 28 10:41:00 crc kubenswrapper[4996]: I0228 10:41:00.033743 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:41:00 crc kubenswrapper[4996]: E0228 10:41:00.035528 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:41:15 crc kubenswrapper[4996]: I0228 10:41:15.033280 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:41:15 crc kubenswrapper[4996]: E0228 10:41:15.034243 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:41:26 crc kubenswrapper[4996]: I0228 10:41:26.033976 4996 scope.go:117] "RemoveContainer" 
containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:41:26 crc kubenswrapper[4996]: E0228 10:41:26.034757 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.817302 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v5k9t"] Feb 28 10:41:37 crc kubenswrapper[4996]: E0228 10:41:37.818228 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d689d7e6-a404-46b2-bf90-3910fd8f7055" containerName="oc" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.818243 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d689d7e6-a404-46b2-bf90-3910fd8f7055" containerName="oc" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.818428 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d689d7e6-a404-46b2-bf90-3910fd8f7055" containerName="oc" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.819674 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.828732 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5k9t"] Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.830647 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-utilities\") pod \"community-operators-v5k9t\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.830739 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-catalog-content\") pod \"community-operators-v5k9t\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.830824 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx4hd\" (UniqueName: \"kubernetes.io/projected/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-kube-api-access-bx4hd\") pod \"community-operators-v5k9t\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.932207 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-utilities\") pod \"community-operators-v5k9t\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.932307 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-catalog-content\") pod \"community-operators-v5k9t\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.932607 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx4hd\" (UniqueName: \"kubernetes.io/projected/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-kube-api-access-bx4hd\") pod \"community-operators-v5k9t\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.933418 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-catalog-content\") pod \"community-operators-v5k9t\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.933456 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-utilities\") pod \"community-operators-v5k9t\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:37 crc kubenswrapper[4996]: I0228 10:41:37.958660 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx4hd\" (UniqueName: \"kubernetes.io/projected/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-kube-api-access-bx4hd\") pod \"community-operators-v5k9t\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:38 crc kubenswrapper[4996]: I0228 10:41:38.150514 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:38 crc kubenswrapper[4996]: I0228 10:41:38.689765 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5k9t"] Feb 28 10:41:39 crc kubenswrapper[4996]: I0228 10:41:39.595379 4996 generic.go:334] "Generic (PLEG): container finished" podID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerID="2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68" exitCode=0 Feb 28 10:41:39 crc kubenswrapper[4996]: I0228 10:41:39.595440 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5k9t" event={"ID":"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef","Type":"ContainerDied","Data":"2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68"} Feb 28 10:41:39 crc kubenswrapper[4996]: I0228 10:41:39.595807 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5k9t" event={"ID":"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef","Type":"ContainerStarted","Data":"1d24876481a745c19d91d93cd20bb04b40830302e1dc50e536ef979fd202608b"} Feb 28 10:41:39 crc kubenswrapper[4996]: I0228 10:41:39.598635 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:41:40 crc kubenswrapper[4996]: I0228 10:41:40.033950 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:41:40 crc kubenswrapper[4996]: E0228 10:41:40.034545 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 
10:41:40 crc kubenswrapper[4996]: I0228 10:41:40.607865 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5k9t" event={"ID":"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef","Type":"ContainerStarted","Data":"b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4"} Feb 28 10:41:42 crc kubenswrapper[4996]: I0228 10:41:42.631034 4996 generic.go:334] "Generic (PLEG): container finished" podID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerID="b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4" exitCode=0 Feb 28 10:41:42 crc kubenswrapper[4996]: I0228 10:41:42.631104 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5k9t" event={"ID":"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef","Type":"ContainerDied","Data":"b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4"} Feb 28 10:41:43 crc kubenswrapper[4996]: I0228 10:41:43.645201 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5k9t" event={"ID":"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef","Type":"ContainerStarted","Data":"947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3"} Feb 28 10:41:43 crc kubenswrapper[4996]: I0228 10:41:43.684628 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v5k9t" podStartSLOduration=3.234533592 podStartE2EDuration="6.684603233s" podCreationTimestamp="2026-02-28 10:41:37 +0000 UTC" firstStartedPulling="2026-02-28 10:41:39.598301159 +0000 UTC m=+6063.289103980" lastFinishedPulling="2026-02-28 10:41:43.04837078 +0000 UTC m=+6066.739173621" observedRunningTime="2026-02-28 10:41:43.666810785 +0000 UTC m=+6067.357613626" watchObservedRunningTime="2026-02-28 10:41:43.684603233 +0000 UTC m=+6067.375406054" Feb 28 10:41:48 crc kubenswrapper[4996]: I0228 10:41:48.151214 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:48 crc kubenswrapper[4996]: I0228 10:41:48.151770 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:48 crc kubenswrapper[4996]: I0228 10:41:48.218179 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:48 crc kubenswrapper[4996]: I0228 10:41:48.738700 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:48 crc kubenswrapper[4996]: I0228 10:41:48.801066 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5k9t"] Feb 28 10:41:50 crc kubenswrapper[4996]: I0228 10:41:50.707757 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v5k9t" podUID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerName="registry-server" containerID="cri-o://947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3" gracePeriod=2 Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.168503 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.303480 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-utilities\") pod \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.303568 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-catalog-content\") pod \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.303659 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx4hd\" (UniqueName: \"kubernetes.io/projected/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-kube-api-access-bx4hd\") pod \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\" (UID: \"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef\") " Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.304375 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-utilities" (OuterVolumeSpecName: "utilities") pod "b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" (UID: "b5ab085c-ca7d-48e5-9d08-e9cc8729fcef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.313298 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-kube-api-access-bx4hd" (OuterVolumeSpecName: "kube-api-access-bx4hd") pod "b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" (UID: "b5ab085c-ca7d-48e5-9d08-e9cc8729fcef"). InnerVolumeSpecName "kube-api-access-bx4hd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.368633 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" (UID: "b5ab085c-ca7d-48e5-9d08-e9cc8729fcef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.406354 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx4hd\" (UniqueName: \"kubernetes.io/projected/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-kube-api-access-bx4hd\") on node \"crc\" DevicePath \"\"" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.406588 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.406654 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.722291 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v5k9t" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.722215 4996 generic.go:334] "Generic (PLEG): container finished" podID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerID="947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3" exitCode=0 Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.722320 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5k9t" event={"ID":"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef","Type":"ContainerDied","Data":"947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3"} Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.722633 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5k9t" event={"ID":"b5ab085c-ca7d-48e5-9d08-e9cc8729fcef","Type":"ContainerDied","Data":"1d24876481a745c19d91d93cd20bb04b40830302e1dc50e536ef979fd202608b"} Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.722664 4996 scope.go:117] "RemoveContainer" containerID="947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.764314 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5k9t"] Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.772025 4996 scope.go:117] "RemoveContainer" containerID="b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.774361 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v5k9t"] Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.803260 4996 scope.go:117] "RemoveContainer" containerID="2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.863684 4996 scope.go:117] "RemoveContainer" 
containerID="947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3" Feb 28 10:41:51 crc kubenswrapper[4996]: E0228 10:41:51.864349 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3\": container with ID starting with 947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3 not found: ID does not exist" containerID="947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.864408 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3"} err="failed to get container status \"947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3\": rpc error: code = NotFound desc = could not find container \"947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3\": container with ID starting with 947e7794e7c1784c443a258ee8e3793e19813d89f94ad3be01692e9d2c77c2b3 not found: ID does not exist" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.864439 4996 scope.go:117] "RemoveContainer" containerID="b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4" Feb 28 10:41:51 crc kubenswrapper[4996]: E0228 10:41:51.864981 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4\": container with ID starting with b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4 not found: ID does not exist" containerID="b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.865054 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4"} err="failed to get container status \"b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4\": rpc error: code = NotFound desc = could not find container \"b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4\": container with ID starting with b9e98175f11b32ecb494e2b5c840f92660e93bb6ee0505bc6269f888f39dc0b4 not found: ID does not exist" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.865088 4996 scope.go:117] "RemoveContainer" containerID="2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68" Feb 28 10:41:51 crc kubenswrapper[4996]: E0228 10:41:51.865495 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68\": container with ID starting with 2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68 not found: ID does not exist" containerID="2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68" Feb 28 10:41:51 crc kubenswrapper[4996]: I0228 10:41:51.865544 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68"} err="failed to get container status \"2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68\": rpc error: code = NotFound desc = could not find container \"2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68\": container with ID starting with 2b39a31622aab925211404611d8c752bef18863b352fb0a3e60371c0d94e3c68 not found: ID does not exist" Feb 28 10:41:52 crc kubenswrapper[4996]: I0228 10:41:52.033174 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:41:52 crc kubenswrapper[4996]: E0228 10:41:52.033425 4996 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:41:53 crc kubenswrapper[4996]: I0228 10:41:53.043777 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" path="/var/lib/kubelet/pods/b5ab085c-ca7d-48e5-9d08-e9cc8729fcef/volumes" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.150878 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537922-dv499"] Feb 28 10:42:00 crc kubenswrapper[4996]: E0228 10:42:00.151648 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerName="extract-utilities" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.151659 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerName="extract-utilities" Feb 28 10:42:00 crc kubenswrapper[4996]: E0228 10:42:00.151672 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerName="extract-content" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.151679 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerName="extract-content" Feb 28 10:42:00 crc kubenswrapper[4996]: E0228 10:42:00.151698 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerName="registry-server" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.151704 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerName="registry-server" Feb 28 
10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.151879 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5ab085c-ca7d-48e5-9d08-e9cc8729fcef" containerName="registry-server" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.152502 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537922-dv499" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.154459 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.155069 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.156849 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.170138 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537922-dv499"] Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.278711 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4qt6\" (UniqueName: \"kubernetes.io/projected/49ab8a24-e23d-4ef1-9b9b-436c44e05134-kube-api-access-d4qt6\") pod \"auto-csr-approver-29537922-dv499\" (UID: \"49ab8a24-e23d-4ef1-9b9b-436c44e05134\") " pod="openshift-infra/auto-csr-approver-29537922-dv499" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.380887 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4qt6\" (UniqueName: \"kubernetes.io/projected/49ab8a24-e23d-4ef1-9b9b-436c44e05134-kube-api-access-d4qt6\") pod \"auto-csr-approver-29537922-dv499\" (UID: \"49ab8a24-e23d-4ef1-9b9b-436c44e05134\") " pod="openshift-infra/auto-csr-approver-29537922-dv499" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 
10:42:00.407196 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4qt6\" (UniqueName: \"kubernetes.io/projected/49ab8a24-e23d-4ef1-9b9b-436c44e05134-kube-api-access-d4qt6\") pod \"auto-csr-approver-29537922-dv499\" (UID: \"49ab8a24-e23d-4ef1-9b9b-436c44e05134\") " pod="openshift-infra/auto-csr-approver-29537922-dv499" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.469593 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537922-dv499" Feb 28 10:42:00 crc kubenswrapper[4996]: I0228 10:42:00.905166 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537922-dv499"] Feb 28 10:42:01 crc kubenswrapper[4996]: I0228 10:42:01.806240 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537922-dv499" event={"ID":"49ab8a24-e23d-4ef1-9b9b-436c44e05134","Type":"ContainerStarted","Data":"65e7af86e4a80fab45f104203c00daecfff3d7b4fdb257a612c4a6ccf2424191"} Feb 28 10:42:02 crc kubenswrapper[4996]: I0228 10:42:02.826874 4996 generic.go:334] "Generic (PLEG): container finished" podID="49ab8a24-e23d-4ef1-9b9b-436c44e05134" containerID="d278737358f397e629b48ea65364216feb8b992153197417aa28368f5ff35623" exitCode=0 Feb 28 10:42:02 crc kubenswrapper[4996]: I0228 10:42:02.827106 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537922-dv499" event={"ID":"49ab8a24-e23d-4ef1-9b9b-436c44e05134","Type":"ContainerDied","Data":"d278737358f397e629b48ea65364216feb8b992153197417aa28368f5ff35623"} Feb 28 10:42:04 crc kubenswrapper[4996]: I0228 10:42:04.032949 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:42:04 crc kubenswrapper[4996]: E0228 10:42:04.033473 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:42:04 crc kubenswrapper[4996]: I0228 10:42:04.271768 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537922-dv499" Feb 28 10:42:04 crc kubenswrapper[4996]: I0228 10:42:04.364040 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4qt6\" (UniqueName: \"kubernetes.io/projected/49ab8a24-e23d-4ef1-9b9b-436c44e05134-kube-api-access-d4qt6\") pod \"49ab8a24-e23d-4ef1-9b9b-436c44e05134\" (UID: \"49ab8a24-e23d-4ef1-9b9b-436c44e05134\") " Feb 28 10:42:04 crc kubenswrapper[4996]: I0228 10:42:04.390053 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ab8a24-e23d-4ef1-9b9b-436c44e05134-kube-api-access-d4qt6" (OuterVolumeSpecName: "kube-api-access-d4qt6") pod "49ab8a24-e23d-4ef1-9b9b-436c44e05134" (UID: "49ab8a24-e23d-4ef1-9b9b-436c44e05134"). InnerVolumeSpecName "kube-api-access-d4qt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:42:04 crc kubenswrapper[4996]: I0228 10:42:04.467177 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4qt6\" (UniqueName: \"kubernetes.io/projected/49ab8a24-e23d-4ef1-9b9b-436c44e05134-kube-api-access-d4qt6\") on node \"crc\" DevicePath \"\"" Feb 28 10:42:04 crc kubenswrapper[4996]: I0228 10:42:04.845649 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537922-dv499" event={"ID":"49ab8a24-e23d-4ef1-9b9b-436c44e05134","Type":"ContainerDied","Data":"65e7af86e4a80fab45f104203c00daecfff3d7b4fdb257a612c4a6ccf2424191"} Feb 28 10:42:04 crc kubenswrapper[4996]: I0228 10:42:04.845731 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65e7af86e4a80fab45f104203c00daecfff3d7b4fdb257a612c4a6ccf2424191" Feb 28 10:42:04 crc kubenswrapper[4996]: I0228 10:42:04.845793 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537922-dv499" Feb 28 10:42:05 crc kubenswrapper[4996]: I0228 10:42:05.349832 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537916-pv5q7"] Feb 28 10:42:05 crc kubenswrapper[4996]: I0228 10:42:05.358474 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537916-pv5q7"] Feb 28 10:42:07 crc kubenswrapper[4996]: I0228 10:42:07.044394 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a68823da-d036-46a7-a656-fc65910c7f55" path="/var/lib/kubelet/pods/a68823da-d036-46a7-a656-fc65910c7f55/volumes" Feb 28 10:42:15 crc kubenswrapper[4996]: I0228 10:42:15.033735 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:42:15 crc kubenswrapper[4996]: I0228 10:42:15.940987 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"cba82d5d8766174a11993753af3ff89969fd09057c88e8afbd93a1432add2db1"} Feb 28 10:42:27 crc kubenswrapper[4996]: I0228 10:42:27.930889 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8v2pz"] Feb 28 10:42:27 crc kubenswrapper[4996]: E0228 10:42:27.932200 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ab8a24-e23d-4ef1-9b9b-436c44e05134" containerName="oc" Feb 28 10:42:27 crc kubenswrapper[4996]: I0228 10:42:27.932218 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ab8a24-e23d-4ef1-9b9b-436c44e05134" containerName="oc" Feb 28 10:42:27 crc kubenswrapper[4996]: I0228 10:42:27.932444 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ab8a24-e23d-4ef1-9b9b-436c44e05134" containerName="oc" Feb 28 10:42:27 crc kubenswrapper[4996]: I0228 10:42:27.934096 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:27 crc kubenswrapper[4996]: I0228 10:42:27.941146 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8v2pz"] Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.033771 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-catalog-content\") pod \"redhat-operators-8v2pz\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.033950 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kczt\" (UniqueName: \"kubernetes.io/projected/267dd689-f88d-4232-ada5-1d12781607c5-kube-api-access-7kczt\") pod \"redhat-operators-8v2pz\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.034137 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-utilities\") pod \"redhat-operators-8v2pz\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.135496 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kczt\" (UniqueName: \"kubernetes.io/projected/267dd689-f88d-4232-ada5-1d12781607c5-kube-api-access-7kczt\") pod \"redhat-operators-8v2pz\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.135650 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-utilities\") pod \"redhat-operators-8v2pz\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.136422 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-catalog-content\") pod \"redhat-operators-8v2pz\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.136513 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-utilities\") pod \"redhat-operators-8v2pz\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.136791 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-catalog-content\") pod \"redhat-operators-8v2pz\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.162997 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kczt\" (UniqueName: \"kubernetes.io/projected/267dd689-f88d-4232-ada5-1d12781607c5-kube-api-access-7kczt\") pod \"redhat-operators-8v2pz\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.253628 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:28 crc kubenswrapper[4996]: I0228 10:42:28.750670 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8v2pz"] Feb 28 10:42:29 crc kubenswrapper[4996]: I0228 10:42:29.048473 4996 generic.go:334] "Generic (PLEG): container finished" podID="267dd689-f88d-4232-ada5-1d12781607c5" containerID="58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375" exitCode=0 Feb 28 10:42:29 crc kubenswrapper[4996]: I0228 10:42:29.048546 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v2pz" event={"ID":"267dd689-f88d-4232-ada5-1d12781607c5","Type":"ContainerDied","Data":"58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375"} Feb 28 10:42:29 crc kubenswrapper[4996]: I0228 10:42:29.048818 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v2pz" event={"ID":"267dd689-f88d-4232-ada5-1d12781607c5","Type":"ContainerStarted","Data":"ac37e65fb5e95d5e0f520a3b7ea30ba29ae48f1ae6f57cd0c31ead84e4ba4c98"} Feb 28 10:42:30 crc kubenswrapper[4996]: I0228 10:42:30.060205 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v2pz" event={"ID":"267dd689-f88d-4232-ada5-1d12781607c5","Type":"ContainerStarted","Data":"8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b"} Feb 28 10:42:34 crc kubenswrapper[4996]: I0228 10:42:34.109146 4996 generic.go:334] "Generic (PLEG): container finished" podID="267dd689-f88d-4232-ada5-1d12781607c5" containerID="8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b" exitCode=0 Feb 28 10:42:34 crc kubenswrapper[4996]: I0228 10:42:34.109220 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v2pz" 
event={"ID":"267dd689-f88d-4232-ada5-1d12781607c5","Type":"ContainerDied","Data":"8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b"} Feb 28 10:42:35 crc kubenswrapper[4996]: I0228 10:42:35.123804 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v2pz" event={"ID":"267dd689-f88d-4232-ada5-1d12781607c5","Type":"ContainerStarted","Data":"dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e"} Feb 28 10:42:35 crc kubenswrapper[4996]: I0228 10:42:35.153611 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8v2pz" podStartSLOduration=2.704138115 podStartE2EDuration="8.153590433s" podCreationTimestamp="2026-02-28 10:42:27 +0000 UTC" firstStartedPulling="2026-02-28 10:42:29.050458595 +0000 UTC m=+6112.741261416" lastFinishedPulling="2026-02-28 10:42:34.499910893 +0000 UTC m=+6118.190713734" observedRunningTime="2026-02-28 10:42:35.146702803 +0000 UTC m=+6118.837505614" watchObservedRunningTime="2026-02-28 10:42:35.153590433 +0000 UTC m=+6118.844393244" Feb 28 10:42:38 crc kubenswrapper[4996]: I0228 10:42:38.254681 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:38 crc kubenswrapper[4996]: I0228 10:42:38.255021 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:42:39 crc kubenswrapper[4996]: I0228 10:42:39.310430 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8v2pz" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="registry-server" probeResult="failure" output=< Feb 28 10:42:39 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:42:39 crc kubenswrapper[4996]: > Feb 28 10:42:49 crc kubenswrapper[4996]: I0228 10:42:49.306230 4996 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-8v2pz" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="registry-server" probeResult="failure" output=< Feb 28 10:42:49 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:42:49 crc kubenswrapper[4996]: > Feb 28 10:42:49 crc kubenswrapper[4996]: I0228 10:42:49.568056 4996 scope.go:117] "RemoveContainer" containerID="9035483b83b65d2252429c168f08c1d1be1a4d67a5d97541671809856d47ab48" Feb 28 10:42:59 crc kubenswrapper[4996]: I0228 10:42:59.310448 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8v2pz" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="registry-server" probeResult="failure" output=< Feb 28 10:42:59 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 10:42:59 crc kubenswrapper[4996]: > Feb 28 10:43:08 crc kubenswrapper[4996]: I0228 10:43:08.302222 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:43:08 crc kubenswrapper[4996]: I0228 10:43:08.347021 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:43:08 crc kubenswrapper[4996]: I0228 10:43:08.541988 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8v2pz"] Feb 28 10:43:09 crc kubenswrapper[4996]: I0228 10:43:09.415637 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8v2pz" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="registry-server" containerID="cri-o://dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e" gracePeriod=2 Feb 28 10:43:09 crc kubenswrapper[4996]: I0228 10:43:09.954352 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.098130 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kczt\" (UniqueName: \"kubernetes.io/projected/267dd689-f88d-4232-ada5-1d12781607c5-kube-api-access-7kczt\") pod \"267dd689-f88d-4232-ada5-1d12781607c5\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.098288 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-catalog-content\") pod \"267dd689-f88d-4232-ada5-1d12781607c5\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.098659 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-utilities\") pod \"267dd689-f88d-4232-ada5-1d12781607c5\" (UID: \"267dd689-f88d-4232-ada5-1d12781607c5\") " Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.099301 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-utilities" (OuterVolumeSpecName: "utilities") pod "267dd689-f88d-4232-ada5-1d12781607c5" (UID: "267dd689-f88d-4232-ada5-1d12781607c5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.099595 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.114629 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267dd689-f88d-4232-ada5-1d12781607c5-kube-api-access-7kczt" (OuterVolumeSpecName: "kube-api-access-7kczt") pod "267dd689-f88d-4232-ada5-1d12781607c5" (UID: "267dd689-f88d-4232-ada5-1d12781607c5"). InnerVolumeSpecName "kube-api-access-7kczt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.201242 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kczt\" (UniqueName: \"kubernetes.io/projected/267dd689-f88d-4232-ada5-1d12781607c5-kube-api-access-7kczt\") on node \"crc\" DevicePath \"\"" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.239760 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "267dd689-f88d-4232-ada5-1d12781607c5" (UID: "267dd689-f88d-4232-ada5-1d12781607c5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.302734 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/267dd689-f88d-4232-ada5-1d12781607c5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.426249 4996 generic.go:334] "Generic (PLEG): container finished" podID="267dd689-f88d-4232-ada5-1d12781607c5" containerID="dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e" exitCode=0 Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.426319 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8v2pz" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.426336 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v2pz" event={"ID":"267dd689-f88d-4232-ada5-1d12781607c5","Type":"ContainerDied","Data":"dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e"} Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.426372 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8v2pz" event={"ID":"267dd689-f88d-4232-ada5-1d12781607c5","Type":"ContainerDied","Data":"ac37e65fb5e95d5e0f520a3b7ea30ba29ae48f1ae6f57cd0c31ead84e4ba4c98"} Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.426398 4996 scope.go:117] "RemoveContainer" containerID="dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.459840 4996 scope.go:117] "RemoveContainer" containerID="8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.462962 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8v2pz"] Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 
10:43:10.471030 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8v2pz"] Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.488983 4996 scope.go:117] "RemoveContainer" containerID="58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.526813 4996 scope.go:117] "RemoveContainer" containerID="dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e" Feb 28 10:43:10 crc kubenswrapper[4996]: E0228 10:43:10.527233 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e\": container with ID starting with dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e not found: ID does not exist" containerID="dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.527268 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e"} err="failed to get container status \"dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e\": rpc error: code = NotFound desc = could not find container \"dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e\": container with ID starting with dae5cbcc389bdff6150f488ec797a59677ec79213ff8cb77c6f71f8d36c2f30e not found: ID does not exist" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.527289 4996 scope.go:117] "RemoveContainer" containerID="8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b" Feb 28 10:43:10 crc kubenswrapper[4996]: E0228 10:43:10.527670 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b\": container with ID 
starting with 8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b not found: ID does not exist" containerID="8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.527701 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b"} err="failed to get container status \"8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b\": rpc error: code = NotFound desc = could not find container \"8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b\": container with ID starting with 8d0fca6f73d2ffe4e406361fec72962b5187efac8f6857fe37b029db56ad5c0b not found: ID does not exist" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.527718 4996 scope.go:117] "RemoveContainer" containerID="58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375" Feb 28 10:43:10 crc kubenswrapper[4996]: E0228 10:43:10.527953 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375\": container with ID starting with 58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375 not found: ID does not exist" containerID="58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375" Feb 28 10:43:10 crc kubenswrapper[4996]: I0228 10:43:10.527981 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375"} err="failed to get container status \"58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375\": rpc error: code = NotFound desc = could not find container \"58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375\": container with ID starting with 58816291edff8d36889ca836703cf49becf90d0ab978ddeaa0dd5239107a2375 not found: 
ID does not exist" Feb 28 10:43:11 crc kubenswrapper[4996]: I0228 10:43:11.050581 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="267dd689-f88d-4232-ada5-1d12781607c5" path="/var/lib/kubelet/pods/267dd689-f88d-4232-ada5-1d12781607c5/volumes" Feb 28 10:43:12 crc kubenswrapper[4996]: I0228 10:43:12.950394 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qfpfj"] Feb 28 10:43:12 crc kubenswrapper[4996]: E0228 10:43:12.951098 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="registry-server" Feb 28 10:43:12 crc kubenswrapper[4996]: I0228 10:43:12.951114 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="registry-server" Feb 28 10:43:12 crc kubenswrapper[4996]: E0228 10:43:12.951144 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="extract-content" Feb 28 10:43:12 crc kubenswrapper[4996]: I0228 10:43:12.951152 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="extract-content" Feb 28 10:43:12 crc kubenswrapper[4996]: E0228 10:43:12.951168 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="extract-utilities" Feb 28 10:43:12 crc kubenswrapper[4996]: I0228 10:43:12.951176 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="extract-utilities" Feb 28 10:43:12 crc kubenswrapper[4996]: I0228 10:43:12.951401 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="267dd689-f88d-4232-ada5-1d12781607c5" containerName="registry-server" Feb 28 10:43:12 crc kubenswrapper[4996]: I0228 10:43:12.952993 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:12 crc kubenswrapper[4996]: I0228 10:43:12.972832 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfpfj"] Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.057561 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-utilities\") pod \"redhat-marketplace-qfpfj\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.057758 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-catalog-content\") pod \"redhat-marketplace-qfpfj\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.058060 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dkxf\" (UniqueName: \"kubernetes.io/projected/9f51df32-60f6-430a-a8e8-1ad4673594ce-kube-api-access-7dkxf\") pod \"redhat-marketplace-qfpfj\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.159609 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-utilities\") pod \"redhat-marketplace-qfpfj\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.159668 4996 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-catalog-content\") pod \"redhat-marketplace-qfpfj\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.159773 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dkxf\" (UniqueName: \"kubernetes.io/projected/9f51df32-60f6-430a-a8e8-1ad4673594ce-kube-api-access-7dkxf\") pod \"redhat-marketplace-qfpfj\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.160258 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-utilities\") pod \"redhat-marketplace-qfpfj\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.160289 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-catalog-content\") pod \"redhat-marketplace-qfpfj\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.180776 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dkxf\" (UniqueName: \"kubernetes.io/projected/9f51df32-60f6-430a-a8e8-1ad4673594ce-kube-api-access-7dkxf\") pod \"redhat-marketplace-qfpfj\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.295803 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:13 crc kubenswrapper[4996]: I0228 10:43:13.768371 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfpfj"] Feb 28 10:43:14 crc kubenswrapper[4996]: I0228 10:43:14.461242 4996 generic.go:334] "Generic (PLEG): container finished" podID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerID="d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8" exitCode=0 Feb 28 10:43:14 crc kubenswrapper[4996]: I0228 10:43:14.461311 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfpfj" event={"ID":"9f51df32-60f6-430a-a8e8-1ad4673594ce","Type":"ContainerDied","Data":"d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8"} Feb 28 10:43:14 crc kubenswrapper[4996]: I0228 10:43:14.461704 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfpfj" event={"ID":"9f51df32-60f6-430a-a8e8-1ad4673594ce","Type":"ContainerStarted","Data":"8cb8264c4839dcb3916e388c8e934b68f36bd4be80eb56e74f02fc247b003d91"} Feb 28 10:43:15 crc kubenswrapper[4996]: I0228 10:43:15.477098 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfpfj" event={"ID":"9f51df32-60f6-430a-a8e8-1ad4673594ce","Type":"ContainerStarted","Data":"6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f"} Feb 28 10:43:16 crc kubenswrapper[4996]: I0228 10:43:16.488288 4996 generic.go:334] "Generic (PLEG): container finished" podID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerID="6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f" exitCode=0 Feb 28 10:43:16 crc kubenswrapper[4996]: I0228 10:43:16.488367 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfpfj" 
event={"ID":"9f51df32-60f6-430a-a8e8-1ad4673594ce","Type":"ContainerDied","Data":"6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f"} Feb 28 10:43:17 crc kubenswrapper[4996]: I0228 10:43:17.500636 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfpfj" event={"ID":"9f51df32-60f6-430a-a8e8-1ad4673594ce","Type":"ContainerStarted","Data":"c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c"} Feb 28 10:43:17 crc kubenswrapper[4996]: I0228 10:43:17.531142 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qfpfj" podStartSLOduration=3.032384976 podStartE2EDuration="5.531123389s" podCreationTimestamp="2026-02-28 10:43:12 +0000 UTC" firstStartedPulling="2026-02-28 10:43:14.463656474 +0000 UTC m=+6158.154459295" lastFinishedPulling="2026-02-28 10:43:16.962394857 +0000 UTC m=+6160.653197708" observedRunningTime="2026-02-28 10:43:17.526407204 +0000 UTC m=+6161.217210055" watchObservedRunningTime="2026-02-28 10:43:17.531123389 +0000 UTC m=+6161.221926200" Feb 28 10:43:23 crc kubenswrapper[4996]: I0228 10:43:23.296671 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:23 crc kubenswrapper[4996]: I0228 10:43:23.297411 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:23 crc kubenswrapper[4996]: I0228 10:43:23.356351 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:23 crc kubenswrapper[4996]: I0228 10:43:23.609414 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:23 crc kubenswrapper[4996]: I0228 10:43:23.661146 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-qfpfj"] Feb 28 10:43:25 crc kubenswrapper[4996]: I0228 10:43:25.578976 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qfpfj" podUID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerName="registry-server" containerID="cri-o://c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c" gracePeriod=2 Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.139842 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.237177 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-utilities\") pod \"9f51df32-60f6-430a-a8e8-1ad4673594ce\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.237305 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dkxf\" (UniqueName: \"kubernetes.io/projected/9f51df32-60f6-430a-a8e8-1ad4673594ce-kube-api-access-7dkxf\") pod \"9f51df32-60f6-430a-a8e8-1ad4673594ce\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.237464 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-catalog-content\") pod \"9f51df32-60f6-430a-a8e8-1ad4673594ce\" (UID: \"9f51df32-60f6-430a-a8e8-1ad4673594ce\") " Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.243934 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-utilities" (OuterVolumeSpecName: "utilities") pod "9f51df32-60f6-430a-a8e8-1ad4673594ce" (UID: 
"9f51df32-60f6-430a-a8e8-1ad4673594ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.250508 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f51df32-60f6-430a-a8e8-1ad4673594ce-kube-api-access-7dkxf" (OuterVolumeSpecName: "kube-api-access-7dkxf") pod "9f51df32-60f6-430a-a8e8-1ad4673594ce" (UID: "9f51df32-60f6-430a-a8e8-1ad4673594ce"). InnerVolumeSpecName "kube-api-access-7dkxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.274794 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f51df32-60f6-430a-a8e8-1ad4673594ce" (UID: "9f51df32-60f6-430a-a8e8-1ad4673594ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.339498 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.339535 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dkxf\" (UniqueName: \"kubernetes.io/projected/9f51df32-60f6-430a-a8e8-1ad4673594ce-kube-api-access-7dkxf\") on node \"crc\" DevicePath \"\"" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.339544 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f51df32-60f6-430a-a8e8-1ad4673594ce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.593706 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerID="c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c" exitCode=0 Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.593744 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfpfj" event={"ID":"9f51df32-60f6-430a-a8e8-1ad4673594ce","Type":"ContainerDied","Data":"c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c"} Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.594089 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qfpfj" event={"ID":"9f51df32-60f6-430a-a8e8-1ad4673594ce","Type":"ContainerDied","Data":"8cb8264c4839dcb3916e388c8e934b68f36bd4be80eb56e74f02fc247b003d91"} Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.593789 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qfpfj" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.594109 4996 scope.go:117] "RemoveContainer" containerID="c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.615373 4996 scope.go:117] "RemoveContainer" containerID="6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.653237 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfpfj"] Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.654962 4996 scope.go:117] "RemoveContainer" containerID="d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.669037 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qfpfj"] Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.693625 4996 scope.go:117] "RemoveContainer" 
containerID="c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c" Feb 28 10:43:26 crc kubenswrapper[4996]: E0228 10:43:26.694218 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c\": container with ID starting with c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c not found: ID does not exist" containerID="c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.694268 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c"} err="failed to get container status \"c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c\": rpc error: code = NotFound desc = could not find container \"c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c\": container with ID starting with c99627631bfd5413cde167a19d070b9dac52bffb3e51a95282e25d904fb2661c not found: ID does not exist" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.694299 4996 scope.go:117] "RemoveContainer" containerID="6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f" Feb 28 10:43:26 crc kubenswrapper[4996]: E0228 10:43:26.694591 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f\": container with ID starting with 6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f not found: ID does not exist" containerID="6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.694618 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f"} err="failed to get container status \"6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f\": rpc error: code = NotFound desc = could not find container \"6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f\": container with ID starting with 6e29f6b5746a16429744e6274d561cd947663e06f13ac4b8a0f7a7da5cfc540f not found: ID does not exist" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.694635 4996 scope.go:117] "RemoveContainer" containerID="d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8" Feb 28 10:43:26 crc kubenswrapper[4996]: E0228 10:43:26.694875 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8\": container with ID starting with d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8 not found: ID does not exist" containerID="d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8" Feb 28 10:43:26 crc kubenswrapper[4996]: I0228 10:43:26.694900 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8"} err="failed to get container status \"d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8\": rpc error: code = NotFound desc = could not find container \"d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8\": container with ID starting with d0eb0246600dc70bb5edb95064f309c984ad55a1e6b8d239a703f252304b75a8 not found: ID does not exist" Feb 28 10:43:27 crc kubenswrapper[4996]: I0228 10:43:27.044838 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f51df32-60f6-430a-a8e8-1ad4673594ce" path="/var/lib/kubelet/pods/9f51df32-60f6-430a-a8e8-1ad4673594ce/volumes" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 
10:44:00.208807 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537924-8dp2k"] Feb 28 10:44:00 crc kubenswrapper[4996]: E0228 10:44:00.209895 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerName="extract-content" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.209915 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerName="extract-content" Feb 28 10:44:00 crc kubenswrapper[4996]: E0228 10:44:00.209936 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerName="extract-utilities" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.209945 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerName="extract-utilities" Feb 28 10:44:00 crc kubenswrapper[4996]: E0228 10:44:00.209980 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerName="registry-server" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.209989 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerName="registry-server" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.210239 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f51df32-60f6-430a-a8e8-1ad4673594ce" containerName="registry-server" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.210958 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537924-8dp2k" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.214157 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.214227 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.216035 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.258037 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537924-8dp2k"] Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.273363 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbf42\" (UniqueName: \"kubernetes.io/projected/e68bf7a0-d982-45d6-91d0-fbd958919469-kube-api-access-pbf42\") pod \"auto-csr-approver-29537924-8dp2k\" (UID: \"e68bf7a0-d982-45d6-91d0-fbd958919469\") " pod="openshift-infra/auto-csr-approver-29537924-8dp2k" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.374940 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbf42\" (UniqueName: \"kubernetes.io/projected/e68bf7a0-d982-45d6-91d0-fbd958919469-kube-api-access-pbf42\") pod \"auto-csr-approver-29537924-8dp2k\" (UID: \"e68bf7a0-d982-45d6-91d0-fbd958919469\") " pod="openshift-infra/auto-csr-approver-29537924-8dp2k" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.394634 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbf42\" (UniqueName: \"kubernetes.io/projected/e68bf7a0-d982-45d6-91d0-fbd958919469-kube-api-access-pbf42\") pod \"auto-csr-approver-29537924-8dp2k\" (UID: \"e68bf7a0-d982-45d6-91d0-fbd958919469\") " 
pod="openshift-infra/auto-csr-approver-29537924-8dp2k" Feb 28 10:44:00 crc kubenswrapper[4996]: I0228 10:44:00.564367 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537924-8dp2k" Feb 28 10:44:01 crc kubenswrapper[4996]: I0228 10:44:01.054124 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537924-8dp2k"] Feb 28 10:44:01 crc kubenswrapper[4996]: I0228 10:44:01.910620 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537924-8dp2k" event={"ID":"e68bf7a0-d982-45d6-91d0-fbd958919469","Type":"ContainerStarted","Data":"5f489bd1c3f6aa8e935f10386b369643ab4075232d99ede7508266391c961ba1"} Feb 28 10:44:02 crc kubenswrapper[4996]: I0228 10:44:02.923213 4996 generic.go:334] "Generic (PLEG): container finished" podID="e68bf7a0-d982-45d6-91d0-fbd958919469" containerID="51c6535143120e017677c5601d08b45de0da0f8b40c843ed572fcd539fae707f" exitCode=0 Feb 28 10:44:02 crc kubenswrapper[4996]: I0228 10:44:02.923286 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537924-8dp2k" event={"ID":"e68bf7a0-d982-45d6-91d0-fbd958919469","Type":"ContainerDied","Data":"51c6535143120e017677c5601d08b45de0da0f8b40c843ed572fcd539fae707f"} Feb 28 10:44:04 crc kubenswrapper[4996]: I0228 10:44:04.328434 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537924-8dp2k" Feb 28 10:44:04 crc kubenswrapper[4996]: I0228 10:44:04.460859 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbf42\" (UniqueName: \"kubernetes.io/projected/e68bf7a0-d982-45d6-91d0-fbd958919469-kube-api-access-pbf42\") pod \"e68bf7a0-d982-45d6-91d0-fbd958919469\" (UID: \"e68bf7a0-d982-45d6-91d0-fbd958919469\") " Feb 28 10:44:04 crc kubenswrapper[4996]: I0228 10:44:04.467041 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68bf7a0-d982-45d6-91d0-fbd958919469-kube-api-access-pbf42" (OuterVolumeSpecName: "kube-api-access-pbf42") pod "e68bf7a0-d982-45d6-91d0-fbd958919469" (UID: "e68bf7a0-d982-45d6-91d0-fbd958919469"). InnerVolumeSpecName "kube-api-access-pbf42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:44:04 crc kubenswrapper[4996]: I0228 10:44:04.563572 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbf42\" (UniqueName: \"kubernetes.io/projected/e68bf7a0-d982-45d6-91d0-fbd958919469-kube-api-access-pbf42\") on node \"crc\" DevicePath \"\"" Feb 28 10:44:04 crc kubenswrapper[4996]: I0228 10:44:04.954718 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537924-8dp2k" event={"ID":"e68bf7a0-d982-45d6-91d0-fbd958919469","Type":"ContainerDied","Data":"5f489bd1c3f6aa8e935f10386b369643ab4075232d99ede7508266391c961ba1"} Feb 28 10:44:04 crc kubenswrapper[4996]: I0228 10:44:04.954778 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f489bd1c3f6aa8e935f10386b369643ab4075232d99ede7508266391c961ba1" Feb 28 10:44:04 crc kubenswrapper[4996]: I0228 10:44:04.954876 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537924-8dp2k" Feb 28 10:44:05 crc kubenswrapper[4996]: I0228 10:44:05.416768 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537918-6xgl5"] Feb 28 10:44:05 crc kubenswrapper[4996]: I0228 10:44:05.429362 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537918-6xgl5"] Feb 28 10:44:07 crc kubenswrapper[4996]: I0228 10:44:07.047693 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32cee31a-8ec4-4892-9beb-305b1b5a3a6a" path="/var/lib/kubelet/pods/32cee31a-8ec4-4892-9beb-305b1b5a3a6a/volumes" Feb 28 10:44:42 crc kubenswrapper[4996]: I0228 10:44:42.249521 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:44:42 crc kubenswrapper[4996]: I0228 10:44:42.250597 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:44:49 crc kubenswrapper[4996]: I0228 10:44:49.699159 4996 scope.go:117] "RemoveContainer" containerID="2443b599887a0f9300a5a528611ad092e47c1fec47dbb05fba1d3f912e8162e2" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.144531 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv"] Feb 28 10:45:00 crc kubenswrapper[4996]: E0228 10:45:00.145787 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68bf7a0-d982-45d6-91d0-fbd958919469" containerName="oc" Feb 28 
10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.145804 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68bf7a0-d982-45d6-91d0-fbd958919469" containerName="oc" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.146021 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68bf7a0-d982-45d6-91d0-fbd958919469" containerName="oc" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.146752 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.149877 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.149902 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.156359 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv"] Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.243391 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b7964d8-f0a8-441f-ba66-c352972e7c70-config-volume\") pod \"collect-profiles-29537925-2mbfv\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.243733 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b7964d8-f0a8-441f-ba66-c352972e7c70-secret-volume\") pod \"collect-profiles-29537925-2mbfv\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.243762 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djxfg\" (UniqueName: \"kubernetes.io/projected/5b7964d8-f0a8-441f-ba66-c352972e7c70-kube-api-access-djxfg\") pod \"collect-profiles-29537925-2mbfv\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.345990 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b7964d8-f0a8-441f-ba66-c352972e7c70-secret-volume\") pod \"collect-profiles-29537925-2mbfv\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.346079 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djxfg\" (UniqueName: \"kubernetes.io/projected/5b7964d8-f0a8-441f-ba66-c352972e7c70-kube-api-access-djxfg\") pod \"collect-profiles-29537925-2mbfv\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.346237 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b7964d8-f0a8-441f-ba66-c352972e7c70-config-volume\") pod \"collect-profiles-29537925-2mbfv\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.347043 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/5b7964d8-f0a8-441f-ba66-c352972e7c70-config-volume\") pod \"collect-profiles-29537925-2mbfv\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.352570 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b7964d8-f0a8-441f-ba66-c352972e7c70-secret-volume\") pod \"collect-profiles-29537925-2mbfv\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.363167 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djxfg\" (UniqueName: \"kubernetes.io/projected/5b7964d8-f0a8-441f-ba66-c352972e7c70-kube-api-access-djxfg\") pod \"collect-profiles-29537925-2mbfv\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:00 crc kubenswrapper[4996]: I0228 10:45:00.468358 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:01 crc kubenswrapper[4996]: I0228 10:45:01.069115 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv"] Feb 28 10:45:01 crc kubenswrapper[4996]: I0228 10:45:01.468432 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" event={"ID":"5b7964d8-f0a8-441f-ba66-c352972e7c70","Type":"ContainerStarted","Data":"7ca732c19e4d7eccd362345d6c30413e9f0aad70320427a944e9828aca145bd8"} Feb 28 10:45:01 crc kubenswrapper[4996]: I0228 10:45:01.468738 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" event={"ID":"5b7964d8-f0a8-441f-ba66-c352972e7c70","Type":"ContainerStarted","Data":"3b56257a8f47cd8f4ffe6ca584e1014a19b1d9dac6700158a4beff39a19ac836"} Feb 28 10:45:01 crc kubenswrapper[4996]: I0228 10:45:01.492500 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" podStartSLOduration=1.492481528 podStartE2EDuration="1.492481528s" podCreationTimestamp="2026-02-28 10:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 10:45:01.483585198 +0000 UTC m=+6265.174388019" watchObservedRunningTime="2026-02-28 10:45:01.492481528 +0000 UTC m=+6265.183284349" Feb 28 10:45:02 crc kubenswrapper[4996]: I0228 10:45:02.479066 4996 generic.go:334] "Generic (PLEG): container finished" podID="5b7964d8-f0a8-441f-ba66-c352972e7c70" containerID="7ca732c19e4d7eccd362345d6c30413e9f0aad70320427a944e9828aca145bd8" exitCode=0 Feb 28 10:45:02 crc kubenswrapper[4996]: I0228 10:45:02.479178 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" event={"ID":"5b7964d8-f0a8-441f-ba66-c352972e7c70","Type":"ContainerDied","Data":"7ca732c19e4d7eccd362345d6c30413e9f0aad70320427a944e9828aca145bd8"} Feb 28 10:45:03 crc kubenswrapper[4996]: I0228 10:45:03.875078 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.016567 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b7964d8-f0a8-441f-ba66-c352972e7c70-config-volume\") pod \"5b7964d8-f0a8-441f-ba66-c352972e7c70\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.017100 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b7964d8-f0a8-441f-ba66-c352972e7c70-secret-volume\") pod \"5b7964d8-f0a8-441f-ba66-c352972e7c70\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.017166 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djxfg\" (UniqueName: \"kubernetes.io/projected/5b7964d8-f0a8-441f-ba66-c352972e7c70-kube-api-access-djxfg\") pod \"5b7964d8-f0a8-441f-ba66-c352972e7c70\" (UID: \"5b7964d8-f0a8-441f-ba66-c352972e7c70\") " Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.017466 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b7964d8-f0a8-441f-ba66-c352972e7c70-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b7964d8-f0a8-441f-ba66-c352972e7c70" (UID: "5b7964d8-f0a8-441f-ba66-c352972e7c70"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.017667 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b7964d8-f0a8-441f-ba66-c352972e7c70-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.024413 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b7964d8-f0a8-441f-ba66-c352972e7c70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b7964d8-f0a8-441f-ba66-c352972e7c70" (UID: "5b7964d8-f0a8-441f-ba66-c352972e7c70"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.026335 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b7964d8-f0a8-441f-ba66-c352972e7c70-kube-api-access-djxfg" (OuterVolumeSpecName: "kube-api-access-djxfg") pod "5b7964d8-f0a8-441f-ba66-c352972e7c70" (UID: "5b7964d8-f0a8-441f-ba66-c352972e7c70"). InnerVolumeSpecName "kube-api-access-djxfg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.119061 4996 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b7964d8-f0a8-441f-ba66-c352972e7c70-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.119102 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djxfg\" (UniqueName: \"kubernetes.io/projected/5b7964d8-f0a8-441f-ba66-c352972e7c70-kube-api-access-djxfg\") on node \"crc\" DevicePath \"\"" Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.500939 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" event={"ID":"5b7964d8-f0a8-441f-ba66-c352972e7c70","Type":"ContainerDied","Data":"3b56257a8f47cd8f4ffe6ca584e1014a19b1d9dac6700158a4beff39a19ac836"} Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.500979 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537925-2mbfv" Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.500988 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b56257a8f47cd8f4ffe6ca584e1014a19b1d9dac6700158a4beff39a19ac836" Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.564401 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv"] Feb 28 10:45:04 crc kubenswrapper[4996]: I0228 10:45:04.572681 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537880-8jcgv"] Feb 28 10:45:05 crc kubenswrapper[4996]: I0228 10:45:05.051985 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c77675dd-10db-46e8-9236-10af0a0d602a" path="/var/lib/kubelet/pods/c77675dd-10db-46e8-9236-10af0a0d602a/volumes" Feb 28 10:45:12 crc kubenswrapper[4996]: I0228 10:45:12.248712 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:45:12 crc kubenswrapper[4996]: I0228 10:45:12.249234 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:45:42 crc kubenswrapper[4996]: I0228 10:45:42.248493 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:45:42 crc kubenswrapper[4996]: I0228 10:45:42.249244 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:45:42 crc kubenswrapper[4996]: I0228 10:45:42.249311 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:45:42 crc kubenswrapper[4996]: I0228 10:45:42.250388 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cba82d5d8766174a11993753af3ff89969fd09057c88e8afbd93a1432add2db1"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:45:42 crc kubenswrapper[4996]: I0228 10:45:42.250474 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://cba82d5d8766174a11993753af3ff89969fd09057c88e8afbd93a1432add2db1" gracePeriod=600 Feb 28 10:45:42 crc kubenswrapper[4996]: I0228 10:45:42.818935 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="cba82d5d8766174a11993753af3ff89969fd09057c88e8afbd93a1432add2db1" exitCode=0 Feb 28 10:45:42 crc kubenswrapper[4996]: I0228 10:45:42.818976 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"cba82d5d8766174a11993753af3ff89969fd09057c88e8afbd93a1432add2db1"} Feb 28 10:45:42 crc kubenswrapper[4996]: I0228 10:45:42.819314 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7"} Feb 28 10:45:42 crc kubenswrapper[4996]: I0228 10:45:42.819338 4996 scope.go:117] "RemoveContainer" containerID="2613509dc54495ad3daef4b628b0ced3a28a2f93df159e1c8cb5981480360e5b" Feb 28 10:45:49 crc kubenswrapper[4996]: I0228 10:45:49.767771 4996 scope.go:117] "RemoveContainer" containerID="7afa7dd2c546e7a60459a7a4465fe693ce104655089d7ffae43c4824fa6307f2" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.154045 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537926-btmrz"] Feb 28 10:46:00 crc kubenswrapper[4996]: E0228 10:46:00.154883 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b7964d8-f0a8-441f-ba66-c352972e7c70" containerName="collect-profiles" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.154899 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b7964d8-f0a8-441f-ba66-c352972e7c70" containerName="collect-profiles" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.155150 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b7964d8-f0a8-441f-ba66-c352972e7c70" containerName="collect-profiles" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.155886 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537926-btmrz" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.159020 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.160914 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.161782 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.166702 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537926-btmrz"] Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.315940 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g979c\" (UniqueName: \"kubernetes.io/projected/c0e47966-231a-4b95-8185-2fef02afdc8e-kube-api-access-g979c\") pod \"auto-csr-approver-29537926-btmrz\" (UID: \"c0e47966-231a-4b95-8185-2fef02afdc8e\") " pod="openshift-infra/auto-csr-approver-29537926-btmrz" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.417992 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g979c\" (UniqueName: \"kubernetes.io/projected/c0e47966-231a-4b95-8185-2fef02afdc8e-kube-api-access-g979c\") pod \"auto-csr-approver-29537926-btmrz\" (UID: \"c0e47966-231a-4b95-8185-2fef02afdc8e\") " pod="openshift-infra/auto-csr-approver-29537926-btmrz" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.452467 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g979c\" (UniqueName: \"kubernetes.io/projected/c0e47966-231a-4b95-8185-2fef02afdc8e-kube-api-access-g979c\") pod \"auto-csr-approver-29537926-btmrz\" (UID: \"c0e47966-231a-4b95-8185-2fef02afdc8e\") " 
pod="openshift-infra/auto-csr-approver-29537926-btmrz" Feb 28 10:46:00 crc kubenswrapper[4996]: I0228 10:46:00.697120 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537926-btmrz" Feb 28 10:46:01 crc kubenswrapper[4996]: I0228 10:46:01.189363 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537926-btmrz"] Feb 28 10:46:02 crc kubenswrapper[4996]: I0228 10:46:02.004743 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537926-btmrz" event={"ID":"c0e47966-231a-4b95-8185-2fef02afdc8e","Type":"ContainerStarted","Data":"6ed44e51a5d96cfdaec3012edf77d084add825d48c59522aec216e3d61de93b8"} Feb 28 10:46:03 crc kubenswrapper[4996]: I0228 10:46:03.020524 4996 generic.go:334] "Generic (PLEG): container finished" podID="c0e47966-231a-4b95-8185-2fef02afdc8e" containerID="7a137ced98c34e0d0a6b21e3485e8c3f74338a4150eff22de2a1cf8446da921b" exitCode=0 Feb 28 10:46:03 crc kubenswrapper[4996]: I0228 10:46:03.020574 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537926-btmrz" event={"ID":"c0e47966-231a-4b95-8185-2fef02afdc8e","Type":"ContainerDied","Data":"7a137ced98c34e0d0a6b21e3485e8c3f74338a4150eff22de2a1cf8446da921b"} Feb 28 10:46:04 crc kubenswrapper[4996]: I0228 10:46:04.413182 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537926-btmrz" Feb 28 10:46:04 crc kubenswrapper[4996]: I0228 10:46:04.576265 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g979c\" (UniqueName: \"kubernetes.io/projected/c0e47966-231a-4b95-8185-2fef02afdc8e-kube-api-access-g979c\") pod \"c0e47966-231a-4b95-8185-2fef02afdc8e\" (UID: \"c0e47966-231a-4b95-8185-2fef02afdc8e\") " Feb 28 10:46:04 crc kubenswrapper[4996]: I0228 10:46:04.581518 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0e47966-231a-4b95-8185-2fef02afdc8e-kube-api-access-g979c" (OuterVolumeSpecName: "kube-api-access-g979c") pod "c0e47966-231a-4b95-8185-2fef02afdc8e" (UID: "c0e47966-231a-4b95-8185-2fef02afdc8e"). InnerVolumeSpecName "kube-api-access-g979c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:46:04 crc kubenswrapper[4996]: I0228 10:46:04.679116 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g979c\" (UniqueName: \"kubernetes.io/projected/c0e47966-231a-4b95-8185-2fef02afdc8e-kube-api-access-g979c\") on node \"crc\" DevicePath \"\"" Feb 28 10:46:05 crc kubenswrapper[4996]: I0228 10:46:05.040487 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537926-btmrz" Feb 28 10:46:05 crc kubenswrapper[4996]: I0228 10:46:05.056854 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537926-btmrz" event={"ID":"c0e47966-231a-4b95-8185-2fef02afdc8e","Type":"ContainerDied","Data":"6ed44e51a5d96cfdaec3012edf77d084add825d48c59522aec216e3d61de93b8"} Feb 28 10:46:05 crc kubenswrapper[4996]: I0228 10:46:05.056897 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ed44e51a5d96cfdaec3012edf77d084add825d48c59522aec216e3d61de93b8" Feb 28 10:46:05 crc kubenswrapper[4996]: I0228 10:46:05.499323 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537920-jmkj2"] Feb 28 10:46:05 crc kubenswrapper[4996]: I0228 10:46:05.507108 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537920-jmkj2"] Feb 28 10:46:07 crc kubenswrapper[4996]: I0228 10:46:07.044721 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d689d7e6-a404-46b2-bf90-3910fd8f7055" path="/var/lib/kubelet/pods/d689d7e6-a404-46b2-bf90-3910fd8f7055/volumes" Feb 28 10:46:49 crc kubenswrapper[4996]: I0228 10:46:49.821649 4996 scope.go:117] "RemoveContainer" containerID="e90ca9817e514c08a60e57a306e777249f4c3438a7967cf9455e4bb9edf17b64" Feb 28 10:47:42 crc kubenswrapper[4996]: I0228 10:47:42.249086 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:47:42 crc kubenswrapper[4996]: I0228 10:47:42.249620 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.155454 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537928-hn97g"] Feb 28 10:48:00 crc kubenswrapper[4996]: E0228 10:48:00.156403 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0e47966-231a-4b95-8185-2fef02afdc8e" containerName="oc" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.156417 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0e47966-231a-4b95-8185-2fef02afdc8e" containerName="oc" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.156619 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0e47966-231a-4b95-8185-2fef02afdc8e" containerName="oc" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.157265 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537928-hn97g" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.160058 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.160479 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.160556 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.185556 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537928-hn97g"] Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.204846 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8plf\" (UniqueName: \"kubernetes.io/projected/67f0d826-2096-4b46-bac4-752699d121e0-kube-api-access-z8plf\") pod \"auto-csr-approver-29537928-hn97g\" (UID: \"67f0d826-2096-4b46-bac4-752699d121e0\") " pod="openshift-infra/auto-csr-approver-29537928-hn97g" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.307329 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8plf\" (UniqueName: \"kubernetes.io/projected/67f0d826-2096-4b46-bac4-752699d121e0-kube-api-access-z8plf\") pod \"auto-csr-approver-29537928-hn97g\" (UID: \"67f0d826-2096-4b46-bac4-752699d121e0\") " pod="openshift-infra/auto-csr-approver-29537928-hn97g" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.336085 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8plf\" (UniqueName: \"kubernetes.io/projected/67f0d826-2096-4b46-bac4-752699d121e0-kube-api-access-z8plf\") pod \"auto-csr-approver-29537928-hn97g\" (UID: \"67f0d826-2096-4b46-bac4-752699d121e0\") " 
pod="openshift-infra/auto-csr-approver-29537928-hn97g" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.484552 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537928-hn97g" Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.936302 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537928-hn97g"] Feb 28 10:48:00 crc kubenswrapper[4996]: I0228 10:48:00.944086 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:48:01 crc kubenswrapper[4996]: I0228 10:48:01.043272 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537928-hn97g" event={"ID":"67f0d826-2096-4b46-bac4-752699d121e0","Type":"ContainerStarted","Data":"1610a1a6726e5f74d1ff2bffdfe8fb94481032c7803976a4e658acafab4e4b2d"} Feb 28 10:48:03 crc kubenswrapper[4996]: I0228 10:48:03.054911 4996 generic.go:334] "Generic (PLEG): container finished" podID="67f0d826-2096-4b46-bac4-752699d121e0" containerID="93c3eb428e768cdcfcbb6f50dfa6e43a9d0fed82274f94866c2d01b76f034d2b" exitCode=0 Feb 28 10:48:03 crc kubenswrapper[4996]: I0228 10:48:03.055346 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537928-hn97g" event={"ID":"67f0d826-2096-4b46-bac4-752699d121e0","Type":"ContainerDied","Data":"93c3eb428e768cdcfcbb6f50dfa6e43a9d0fed82274f94866c2d01b76f034d2b"} Feb 28 10:48:04 crc kubenswrapper[4996]: I0228 10:48:04.484146 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537928-hn97g" Feb 28 10:48:04 crc kubenswrapper[4996]: I0228 10:48:04.594956 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8plf\" (UniqueName: \"kubernetes.io/projected/67f0d826-2096-4b46-bac4-752699d121e0-kube-api-access-z8plf\") pod \"67f0d826-2096-4b46-bac4-752699d121e0\" (UID: \"67f0d826-2096-4b46-bac4-752699d121e0\") " Feb 28 10:48:04 crc kubenswrapper[4996]: I0228 10:48:04.603400 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f0d826-2096-4b46-bac4-752699d121e0-kube-api-access-z8plf" (OuterVolumeSpecName: "kube-api-access-z8plf") pod "67f0d826-2096-4b46-bac4-752699d121e0" (UID: "67f0d826-2096-4b46-bac4-752699d121e0"). InnerVolumeSpecName "kube-api-access-z8plf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:48:04 crc kubenswrapper[4996]: I0228 10:48:04.697146 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8plf\" (UniqueName: \"kubernetes.io/projected/67f0d826-2096-4b46-bac4-752699d121e0-kube-api-access-z8plf\") on node \"crc\" DevicePath \"\"" Feb 28 10:48:05 crc kubenswrapper[4996]: I0228 10:48:05.077545 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537928-hn97g" event={"ID":"67f0d826-2096-4b46-bac4-752699d121e0","Type":"ContainerDied","Data":"1610a1a6726e5f74d1ff2bffdfe8fb94481032c7803976a4e658acafab4e4b2d"} Feb 28 10:48:05 crc kubenswrapper[4996]: I0228 10:48:05.077604 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1610a1a6726e5f74d1ff2bffdfe8fb94481032c7803976a4e658acafab4e4b2d" Feb 28 10:48:05 crc kubenswrapper[4996]: I0228 10:48:05.077676 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537928-hn97g" Feb 28 10:48:05 crc kubenswrapper[4996]: I0228 10:48:05.567321 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537922-dv499"] Feb 28 10:48:05 crc kubenswrapper[4996]: I0228 10:48:05.575315 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537922-dv499"] Feb 28 10:48:07 crc kubenswrapper[4996]: I0228 10:48:07.054976 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ab8a24-e23d-4ef1-9b9b-436c44e05134" path="/var/lib/kubelet/pods/49ab8a24-e23d-4ef1-9b9b-436c44e05134/volumes" Feb 28 10:48:12 crc kubenswrapper[4996]: I0228 10:48:12.249532 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:48:12 crc kubenswrapper[4996]: I0228 10:48:12.250453 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:48:42 crc kubenswrapper[4996]: I0228 10:48:42.249254 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:48:42 crc kubenswrapper[4996]: I0228 10:48:42.249741 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:48:42 crc kubenswrapper[4996]: I0228 10:48:42.249796 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:48:42 crc kubenswrapper[4996]: I0228 10:48:42.250582 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:48:42 crc kubenswrapper[4996]: I0228 10:48:42.250641 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" gracePeriod=600 Feb 28 10:48:42 crc kubenswrapper[4996]: E0228 10:48:42.370673 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:48:42 crc kubenswrapper[4996]: I0228 10:48:42.433108 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" exitCode=0 Feb 28 
10:48:42 crc kubenswrapper[4996]: I0228 10:48:42.433169 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7"} Feb 28 10:48:42 crc kubenswrapper[4996]: I0228 10:48:42.433202 4996 scope.go:117] "RemoveContainer" containerID="cba82d5d8766174a11993753af3ff89969fd09057c88e8afbd93a1432add2db1" Feb 28 10:48:42 crc kubenswrapper[4996]: I0228 10:48:42.433798 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:48:42 crc kubenswrapper[4996]: E0228 10:48:42.434037 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:48:49 crc kubenswrapper[4996]: I0228 10:48:49.908666 4996 scope.go:117] "RemoveContainer" containerID="d278737358f397e629b48ea65364216feb8b992153197417aa28368f5ff35623" Feb 28 10:48:57 crc kubenswrapper[4996]: I0228 10:48:57.046241 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:48:57 crc kubenswrapper[4996]: E0228 10:48:57.047452 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:49:11 crc kubenswrapper[4996]: I0228 10:49:11.033765 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:49:11 crc kubenswrapper[4996]: E0228 10:49:11.034500 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.603740 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jpxq2"] Feb 28 10:49:15 crc kubenswrapper[4996]: E0228 10:49:15.604701 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f0d826-2096-4b46-bac4-752699d121e0" containerName="oc" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.604714 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f0d826-2096-4b46-bac4-752699d121e0" containerName="oc" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.604877 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f0d826-2096-4b46-bac4-752699d121e0" containerName="oc" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.606416 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.618261 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jpxq2"] Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.682601 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-utilities\") pod \"certified-operators-jpxq2\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.682973 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps8rl\" (UniqueName: \"kubernetes.io/projected/3042c469-edf3-4435-926d-5c24d1307212-kube-api-access-ps8rl\") pod \"certified-operators-jpxq2\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.683058 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-catalog-content\") pod \"certified-operators-jpxq2\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.785275 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-utilities\") pod \"certified-operators-jpxq2\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.785834 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ps8rl\" (UniqueName: \"kubernetes.io/projected/3042c469-edf3-4435-926d-5c24d1307212-kube-api-access-ps8rl\") pod \"certified-operators-jpxq2\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.785763 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-utilities\") pod \"certified-operators-jpxq2\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.786241 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-catalog-content\") pod \"certified-operators-jpxq2\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.786275 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-catalog-content\") pod \"certified-operators-jpxq2\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.805406 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps8rl\" (UniqueName: \"kubernetes.io/projected/3042c469-edf3-4435-926d-5c24d1307212-kube-api-access-ps8rl\") pod \"certified-operators-jpxq2\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:15 crc kubenswrapper[4996]: I0228 10:49:15.938218 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:16 crc kubenswrapper[4996]: I0228 10:49:16.512024 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jpxq2"] Feb 28 10:49:16 crc kubenswrapper[4996]: I0228 10:49:16.836532 4996 generic.go:334] "Generic (PLEG): container finished" podID="3042c469-edf3-4435-926d-5c24d1307212" containerID="21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed" exitCode=0 Feb 28 10:49:16 crc kubenswrapper[4996]: I0228 10:49:16.836662 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxq2" event={"ID":"3042c469-edf3-4435-926d-5c24d1307212","Type":"ContainerDied","Data":"21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed"} Feb 28 10:49:16 crc kubenswrapper[4996]: I0228 10:49:16.836901 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxq2" event={"ID":"3042c469-edf3-4435-926d-5c24d1307212","Type":"ContainerStarted","Data":"1690f723f93d900d3d7057657f3f86c7368c716fd0a2db337be212b9c62a50aa"} Feb 28 10:49:17 crc kubenswrapper[4996]: I0228 10:49:17.845069 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxq2" event={"ID":"3042c469-edf3-4435-926d-5c24d1307212","Type":"ContainerStarted","Data":"dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646"} Feb 28 10:49:19 crc kubenswrapper[4996]: I0228 10:49:19.864160 4996 generic.go:334] "Generic (PLEG): container finished" podID="3042c469-edf3-4435-926d-5c24d1307212" containerID="dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646" exitCode=0 Feb 28 10:49:19 crc kubenswrapper[4996]: I0228 10:49:19.864259 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxq2" 
event={"ID":"3042c469-edf3-4435-926d-5c24d1307212","Type":"ContainerDied","Data":"dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646"} Feb 28 10:49:20 crc kubenswrapper[4996]: I0228 10:49:20.874591 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxq2" event={"ID":"3042c469-edf3-4435-926d-5c24d1307212","Type":"ContainerStarted","Data":"8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354"} Feb 28 10:49:20 crc kubenswrapper[4996]: I0228 10:49:20.898627 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jpxq2" podStartSLOduration=2.430818923 podStartE2EDuration="5.898603317s" podCreationTimestamp="2026-02-28 10:49:15 +0000 UTC" firstStartedPulling="2026-02-28 10:49:16.838833126 +0000 UTC m=+6520.529635937" lastFinishedPulling="2026-02-28 10:49:20.30661751 +0000 UTC m=+6523.997420331" observedRunningTime="2026-02-28 10:49:20.8901527 +0000 UTC m=+6524.580955511" watchObservedRunningTime="2026-02-28 10:49:20.898603317 +0000 UTC m=+6524.589406128" Feb 28 10:49:22 crc kubenswrapper[4996]: I0228 10:49:22.033861 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:49:22 crc kubenswrapper[4996]: E0228 10:49:22.034689 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:49:25 crc kubenswrapper[4996]: I0228 10:49:25.939171 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:25 crc 
kubenswrapper[4996]: I0228 10:49:25.940942 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:26 crc kubenswrapper[4996]: I0228 10:49:26.004778 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:26 crc kubenswrapper[4996]: I0228 10:49:26.966384 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:27 crc kubenswrapper[4996]: I0228 10:49:27.008828 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jpxq2"] Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.014873 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jpxq2" podUID="3042c469-edf3-4435-926d-5c24d1307212" containerName="registry-server" containerID="cri-o://8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354" gracePeriod=2 Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.600123 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.716696 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps8rl\" (UniqueName: \"kubernetes.io/projected/3042c469-edf3-4435-926d-5c24d1307212-kube-api-access-ps8rl\") pod \"3042c469-edf3-4435-926d-5c24d1307212\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.718306 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-utilities\") pod \"3042c469-edf3-4435-926d-5c24d1307212\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.718498 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-catalog-content\") pod \"3042c469-edf3-4435-926d-5c24d1307212\" (UID: \"3042c469-edf3-4435-926d-5c24d1307212\") " Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.719542 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-utilities" (OuterVolumeSpecName: "utilities") pod "3042c469-edf3-4435-926d-5c24d1307212" (UID: "3042c469-edf3-4435-926d-5c24d1307212"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.723493 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3042c469-edf3-4435-926d-5c24d1307212-kube-api-access-ps8rl" (OuterVolumeSpecName: "kube-api-access-ps8rl") pod "3042c469-edf3-4435-926d-5c24d1307212" (UID: "3042c469-edf3-4435-926d-5c24d1307212"). InnerVolumeSpecName "kube-api-access-ps8rl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.795757 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3042c469-edf3-4435-926d-5c24d1307212" (UID: "3042c469-edf3-4435-926d-5c24d1307212"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.821726 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.821761 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3042c469-edf3-4435-926d-5c24d1307212-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:49:29 crc kubenswrapper[4996]: I0228 10:49:29.821773 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps8rl\" (UniqueName: \"kubernetes.io/projected/3042c469-edf3-4435-926d-5c24d1307212-kube-api-access-ps8rl\") on node \"crc\" DevicePath \"\"" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.027596 4996 generic.go:334] "Generic (PLEG): container finished" podID="3042c469-edf3-4435-926d-5c24d1307212" containerID="8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354" exitCode=0 Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.028346 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxq2" event={"ID":"3042c469-edf3-4435-926d-5c24d1307212","Type":"ContainerDied","Data":"8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354"} Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.028505 4996 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jpxq2" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.028609 4996 scope.go:117] "RemoveContainer" containerID="8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.029263 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jpxq2" event={"ID":"3042c469-edf3-4435-926d-5c24d1307212","Type":"ContainerDied","Data":"1690f723f93d900d3d7057657f3f86c7368c716fd0a2db337be212b9c62a50aa"} Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.049087 4996 scope.go:117] "RemoveContainer" containerID="dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.070167 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jpxq2"] Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.077647 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jpxq2"] Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.081531 4996 scope.go:117] "RemoveContainer" containerID="21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.118690 4996 scope.go:117] "RemoveContainer" containerID="8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354" Feb 28 10:49:30 crc kubenswrapper[4996]: E0228 10:49:30.119303 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354\": container with ID starting with 8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354 not found: ID does not exist" containerID="8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.119370 
4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354"} err="failed to get container status \"8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354\": rpc error: code = NotFound desc = could not find container \"8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354\": container with ID starting with 8da7b9d42b2f70452f14e7866c8ab5822c86bd56dafe2039e0934028c4306354 not found: ID does not exist" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.119418 4996 scope.go:117] "RemoveContainer" containerID="dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646" Feb 28 10:49:30 crc kubenswrapper[4996]: E0228 10:49:30.119834 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646\": container with ID starting with dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646 not found: ID does not exist" containerID="dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.119885 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646"} err="failed to get container status \"dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646\": rpc error: code = NotFound desc = could not find container \"dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646\": container with ID starting with dd5e81893714a575afa295ef3fbf3eff1c8caf66de23761de4533325103bd646 not found: ID does not exist" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.119926 4996 scope.go:117] "RemoveContainer" containerID="21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed" Feb 28 10:49:30 crc kubenswrapper[4996]: E0228 
10:49:30.120274 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed\": container with ID starting with 21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed not found: ID does not exist" containerID="21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed" Feb 28 10:49:30 crc kubenswrapper[4996]: I0228 10:49:30.120305 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed"} err="failed to get container status \"21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed\": rpc error: code = NotFound desc = could not find container \"21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed\": container with ID starting with 21d8d6bb30af8c821048c7eed8a7484d003c3294a38ed2271d6af5e5a639b8ed not found: ID does not exist" Feb 28 10:49:31 crc kubenswrapper[4996]: I0228 10:49:31.042323 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3042c469-edf3-4435-926d-5c24d1307212" path="/var/lib/kubelet/pods/3042c469-edf3-4435-926d-5c24d1307212/volumes" Feb 28 10:49:33 crc kubenswrapper[4996]: I0228 10:49:33.032852 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:49:33 crc kubenswrapper[4996]: E0228 10:49:33.033469 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:49:48 crc kubenswrapper[4996]: I0228 10:49:48.033321 
4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:49:48 crc kubenswrapper[4996]: E0228 10:49:48.033969 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.033275 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:50:00 crc kubenswrapper[4996]: E0228 10:50:00.033959 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.145311 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537930-4m2px"] Feb 28 10:50:00 crc kubenswrapper[4996]: E0228 10:50:00.145802 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3042c469-edf3-4435-926d-5c24d1307212" containerName="extract-utilities" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.145823 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3042c469-edf3-4435-926d-5c24d1307212" containerName="extract-utilities" Feb 28 10:50:00 crc kubenswrapper[4996]: E0228 10:50:00.145851 4996 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3042c469-edf3-4435-926d-5c24d1307212" containerName="registry-server" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.145863 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3042c469-edf3-4435-926d-5c24d1307212" containerName="registry-server" Feb 28 10:50:00 crc kubenswrapper[4996]: E0228 10:50:00.145898 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3042c469-edf3-4435-926d-5c24d1307212" containerName="extract-content" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.145906 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3042c469-edf3-4435-926d-5c24d1307212" containerName="extract-content" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.146172 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3042c469-edf3-4435-926d-5c24d1307212" containerName="registry-server" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.146971 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537930-4m2px" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.150320 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.150438 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.150741 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.154757 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537930-4m2px"] Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.211036 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2pjm\" (UniqueName: 
\"kubernetes.io/projected/21b44dcf-9b64-423d-9b23-7f5ead262ae4-kube-api-access-n2pjm\") pod \"auto-csr-approver-29537930-4m2px\" (UID: \"21b44dcf-9b64-423d-9b23-7f5ead262ae4\") " pod="openshift-infra/auto-csr-approver-29537930-4m2px" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.313160 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2pjm\" (UniqueName: \"kubernetes.io/projected/21b44dcf-9b64-423d-9b23-7f5ead262ae4-kube-api-access-n2pjm\") pod \"auto-csr-approver-29537930-4m2px\" (UID: \"21b44dcf-9b64-423d-9b23-7f5ead262ae4\") " pod="openshift-infra/auto-csr-approver-29537930-4m2px" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.336873 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2pjm\" (UniqueName: \"kubernetes.io/projected/21b44dcf-9b64-423d-9b23-7f5ead262ae4-kube-api-access-n2pjm\") pod \"auto-csr-approver-29537930-4m2px\" (UID: \"21b44dcf-9b64-423d-9b23-7f5ead262ae4\") " pod="openshift-infra/auto-csr-approver-29537930-4m2px" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.474772 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537930-4m2px" Feb 28 10:50:00 crc kubenswrapper[4996]: I0228 10:50:00.952493 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537930-4m2px"] Feb 28 10:50:01 crc kubenswrapper[4996]: I0228 10:50:01.292063 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537930-4m2px" event={"ID":"21b44dcf-9b64-423d-9b23-7f5ead262ae4","Type":"ContainerStarted","Data":"1d46c0567c9ce819147407e4ec7184c420f92458cc711eeefb115a7fa10facc7"} Feb 28 10:50:02 crc kubenswrapper[4996]: I0228 10:50:02.303838 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537930-4m2px" event={"ID":"21b44dcf-9b64-423d-9b23-7f5ead262ae4","Type":"ContainerStarted","Data":"b782dbdf94e2f00cbe01fc35f5b46ef1303286421357d71d0dd01196e5664ca6"} Feb 28 10:50:02 crc kubenswrapper[4996]: I0228 10:50:02.327980 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537930-4m2px" podStartSLOduration=1.451262819 podStartE2EDuration="2.327956771s" podCreationTimestamp="2026-02-28 10:50:00 +0000 UTC" firstStartedPulling="2026-02-28 10:50:00.950493389 +0000 UTC m=+6564.641296200" lastFinishedPulling="2026-02-28 10:50:01.827187341 +0000 UTC m=+6565.517990152" observedRunningTime="2026-02-28 10:50:02.316278486 +0000 UTC m=+6566.007081317" watchObservedRunningTime="2026-02-28 10:50:02.327956771 +0000 UTC m=+6566.018759582" Feb 28 10:50:03 crc kubenswrapper[4996]: I0228 10:50:03.314667 4996 generic.go:334] "Generic (PLEG): container finished" podID="21b44dcf-9b64-423d-9b23-7f5ead262ae4" containerID="b782dbdf94e2f00cbe01fc35f5b46ef1303286421357d71d0dd01196e5664ca6" exitCode=0 Feb 28 10:50:03 crc kubenswrapper[4996]: I0228 10:50:03.314741 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537930-4m2px" 
event={"ID":"21b44dcf-9b64-423d-9b23-7f5ead262ae4","Type":"ContainerDied","Data":"b782dbdf94e2f00cbe01fc35f5b46ef1303286421357d71d0dd01196e5664ca6"} Feb 28 10:50:04 crc kubenswrapper[4996]: I0228 10:50:04.792609 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537930-4m2px" Feb 28 10:50:04 crc kubenswrapper[4996]: I0228 10:50:04.908763 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2pjm\" (UniqueName: \"kubernetes.io/projected/21b44dcf-9b64-423d-9b23-7f5ead262ae4-kube-api-access-n2pjm\") pod \"21b44dcf-9b64-423d-9b23-7f5ead262ae4\" (UID: \"21b44dcf-9b64-423d-9b23-7f5ead262ae4\") " Feb 28 10:50:04 crc kubenswrapper[4996]: I0228 10:50:04.915448 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21b44dcf-9b64-423d-9b23-7f5ead262ae4-kube-api-access-n2pjm" (OuterVolumeSpecName: "kube-api-access-n2pjm") pod "21b44dcf-9b64-423d-9b23-7f5ead262ae4" (UID: "21b44dcf-9b64-423d-9b23-7f5ead262ae4"). InnerVolumeSpecName "kube-api-access-n2pjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:50:05 crc kubenswrapper[4996]: I0228 10:50:05.011959 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2pjm\" (UniqueName: \"kubernetes.io/projected/21b44dcf-9b64-423d-9b23-7f5ead262ae4-kube-api-access-n2pjm\") on node \"crc\" DevicePath \"\"" Feb 28 10:50:05 crc kubenswrapper[4996]: I0228 10:50:05.343887 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537930-4m2px" event={"ID":"21b44dcf-9b64-423d-9b23-7f5ead262ae4","Type":"ContainerDied","Data":"1d46c0567c9ce819147407e4ec7184c420f92458cc711eeefb115a7fa10facc7"} Feb 28 10:50:05 crc kubenswrapper[4996]: I0228 10:50:05.343945 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d46c0567c9ce819147407e4ec7184c420f92458cc711eeefb115a7fa10facc7" Feb 28 10:50:05 crc kubenswrapper[4996]: I0228 10:50:05.344075 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537930-4m2px" Feb 28 10:50:05 crc kubenswrapper[4996]: I0228 10:50:05.396447 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537924-8dp2k"] Feb 28 10:50:05 crc kubenswrapper[4996]: I0228 10:50:05.405418 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537924-8dp2k"] Feb 28 10:50:07 crc kubenswrapper[4996]: I0228 10:50:07.042580 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68bf7a0-d982-45d6-91d0-fbd958919469" path="/var/lib/kubelet/pods/e68bf7a0-d982-45d6-91d0-fbd958919469/volumes" Feb 28 10:50:15 crc kubenswrapper[4996]: I0228 10:50:15.034570 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:50:15 crc kubenswrapper[4996]: E0228 10:50:15.035487 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:50:29 crc kubenswrapper[4996]: I0228 10:50:29.033960 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:50:29 crc kubenswrapper[4996]: E0228 10:50:29.034838 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:50:43 crc kubenswrapper[4996]: I0228 10:50:43.033199 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:50:43 crc kubenswrapper[4996]: E0228 10:50:43.033890 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:50:50 crc kubenswrapper[4996]: I0228 10:50:50.052921 4996 scope.go:117] "RemoveContainer" containerID="51c6535143120e017677c5601d08b45de0da0f8b40c843ed572fcd539fae707f" Feb 28 10:50:50 crc kubenswrapper[4996]: I0228 10:50:50.723810 4996 generic.go:334] "Generic (PLEG): container 
finished" podID="5f62e7a0-18c6-441e-8804-4760a6dd1efc" containerID="0a84d72e98f18fee94ce465c4857e3cd008738d0f98edd0efdf028012cba9110" exitCode=0 Feb 28 10:50:50 crc kubenswrapper[4996]: I0228 10:50:50.724118 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"5f62e7a0-18c6-441e-8804-4760a6dd1efc","Type":"ContainerDied","Data":"0a84d72e98f18fee94ce465c4857e3cd008738d0f98edd0efdf028012cba9110"} Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.449755 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.599359 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Feb 28 10:50:52 crc kubenswrapper[4996]: E0228 10:50:52.600110 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f62e7a0-18c6-441e-8804-4760a6dd1efc" containerName="tempest-tests-tempest-tests-runner" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.600133 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f62e7a0-18c6-441e-8804-4760a6dd1efc" containerName="tempest-tests-tempest-tests-runner" Feb 28 10:50:52 crc kubenswrapper[4996]: E0228 10:50:52.600154 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21b44dcf-9b64-423d-9b23-7f5ead262ae4" containerName="oc" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.600162 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="21b44dcf-9b64-423d-9b23-7f5ead262ae4" containerName="oc" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.600428 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="21b44dcf-9b64-423d-9b23-7f5ead262ae4" containerName="oc" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.600451 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f62e7a0-18c6-441e-8804-4760a6dd1efc" 
containerName="tempest-tests-tempest-tests-runner" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.601116 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.603693 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.603888 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.615417 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.619260 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.619304 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b7z4\" (UniqueName: \"kubernetes.io/projected/5f62e7a0-18c6-441e-8804-4760a6dd1efc-kube-api-access-8b7z4\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.619324 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-config-data\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.619362 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-temporary\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.619882 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.620415 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-config-data" (OuterVolumeSpecName: "config-data") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.620468 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.620614 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ssh-key\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.620900 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ca-certs\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.621026 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-workdir\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.621050 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config-secret\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.621089 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ceph\") pod \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\" (UID: \"5f62e7a0-18c6-441e-8804-4760a6dd1efc\") " Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.621406 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.621584 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.621617 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.621741 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.621758 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-temporary\") on node 
\"crc\" DevicePath \"\"" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.628589 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ceph" (OuterVolumeSpecName: "ceph") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.636629 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.639571 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f62e7a0-18c6-441e-8804-4760a6dd1efc-kube-api-access-8b7z4" (OuterVolumeSpecName: "kube-api-access-8b7z4") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "kube-api-access-8b7z4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.647035 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.647937 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.649070 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.654714 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.693039 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5f62e7a0-18c6-441e-8804-4760a6dd1efc" (UID: "5f62e7a0-18c6-441e-8804-4760a6dd1efc"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.729745 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.729806 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.729828 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.729853 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.729893 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq2x9\" (UniqueName: 
\"kubernetes.io/projected/af137d17-a90e-42ea-8e73-3dba0196c670-kube-api-access-jq2x9\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730141 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730260 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730295 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730381 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730405 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730590 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5f62e7a0-18c6-441e-8804-4760a6dd1efc-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730603 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730614 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730623 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5f62e7a0-18c6-441e-8804-4760a6dd1efc-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730631 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b7z4\" (UniqueName: \"kubernetes.io/projected/5f62e7a0-18c6-441e-8804-4760a6dd1efc-kube-api-access-8b7z4\") on node \"crc\" DevicePath \"\"" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730643 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 
10:50:52.730652 4996 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5f62e7a0-18c6-441e-8804-4760a6dd1efc-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.730843 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.731351 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.734316 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.741853 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"5f62e7a0-18c6-441e-8804-4760a6dd1efc","Type":"ContainerDied","Data":"69f91a25201472323c99f67cc9e825199f9c0e2884f4913a0c40c6b39f1af2f9"} Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.741897 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69f91a25201472323c99f67cc9e825199f9c0e2884f4913a0c40c6b39f1af2f9" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 
10:50:52.741943 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.763439 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.832553 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.832636 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.832805 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.832864 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.832900 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq2x9\" (UniqueName: \"kubernetes.io/projected/af137d17-a90e-42ea-8e73-3dba0196c670-kube-api-access-jq2x9\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.833026 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.833056 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.833554 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.836454 
4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.837322 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.838437 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.849657 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq2x9\" (UniqueName: \"kubernetes.io/projected/af137d17-a90e-42ea-8e73-3dba0196c670-kube-api-access-jq2x9\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:52 crc kubenswrapper[4996]: I0228 10:50:52.926089 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:50:53 crc kubenswrapper[4996]: I0228 10:50:53.461637 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Feb 28 10:50:53 crc kubenswrapper[4996]: I0228 10:50:53.751114 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"af137d17-a90e-42ea-8e73-3dba0196c670","Type":"ContainerStarted","Data":"e7ff612e0b75a5ef2247e82cd24f2ac9b88e1a001ad42235a1939ab03f862a93"} Feb 28 10:50:55 crc kubenswrapper[4996]: I0228 10:50:55.785292 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"af137d17-a90e-42ea-8e73-3dba0196c670","Type":"ContainerStarted","Data":"37ccf4adad8e7601e917d0e0bbe9b0405790deb7146a77eb9734427081f2c65a"} Feb 28 10:50:55 crc kubenswrapper[4996]: I0228 10:50:55.808222 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-test" podStartSLOduration=3.808193502 podStartE2EDuration="3.808193502s" podCreationTimestamp="2026-02-28 10:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 10:50:55.800727489 +0000 UTC m=+6619.491530300" watchObservedRunningTime="2026-02-28 10:50:55.808193502 +0000 UTC m=+6619.498996343" Feb 28 10:50:56 crc kubenswrapper[4996]: I0228 10:50:56.033698 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:50:56 crc kubenswrapper[4996]: E0228 10:50:56.034032 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:51:10 crc kubenswrapper[4996]: I0228 10:51:10.033538 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:51:10 crc kubenswrapper[4996]: E0228 10:51:10.034473 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:51:25 crc kubenswrapper[4996]: I0228 10:51:25.034023 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:51:25 crc kubenswrapper[4996]: E0228 10:51:25.034726 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:51:36 crc kubenswrapper[4996]: I0228 10:51:36.032754 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:51:36 crc kubenswrapper[4996]: E0228 10:51:36.033761 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.733282 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-88f4b"] Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.738843 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.755347 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88f4b"] Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.836671 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrxf\" (UniqueName: \"kubernetes.io/projected/b07ad513-2f82-45db-b17c-46cf0c4c3e39-kube-api-access-frrxf\") pod \"community-operators-88f4b\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.836790 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-utilities\") pod \"community-operators-88f4b\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.837142 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-catalog-content\") pod \"community-operators-88f4b\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " 
pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.938883 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrxf\" (UniqueName: \"kubernetes.io/projected/b07ad513-2f82-45db-b17c-46cf0c4c3e39-kube-api-access-frrxf\") pod \"community-operators-88f4b\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.938933 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-utilities\") pod \"community-operators-88f4b\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.939037 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-catalog-content\") pod \"community-operators-88f4b\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.939465 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-catalog-content\") pod \"community-operators-88f4b\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.939581 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-utilities\") pod \"community-operators-88f4b\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " 
pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:40 crc kubenswrapper[4996]: I0228 10:51:40.963023 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrxf\" (UniqueName: \"kubernetes.io/projected/b07ad513-2f82-45db-b17c-46cf0c4c3e39-kube-api-access-frrxf\") pod \"community-operators-88f4b\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:41 crc kubenswrapper[4996]: I0228 10:51:41.074730 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:41 crc kubenswrapper[4996]: W0228 10:51:41.658335 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb07ad513_2f82_45db_b17c_46cf0c4c3e39.slice/crio-946c18ebc9499dda28694a718edffab322b582e8732db0f7bf642ff33959130c WatchSource:0}: Error finding container 946c18ebc9499dda28694a718edffab322b582e8732db0f7bf642ff33959130c: Status 404 returned error can't find the container with id 946c18ebc9499dda28694a718edffab322b582e8732db0f7bf642ff33959130c Feb 28 10:51:41 crc kubenswrapper[4996]: I0228 10:51:41.660490 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-88f4b"] Feb 28 10:51:42 crc kubenswrapper[4996]: I0228 10:51:42.262349 4996 generic.go:334] "Generic (PLEG): container finished" podID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerID="c84b5957f2d35dd9c22032b27cc3022f3ad88bfbf916134a9a731fcc1e06dfd6" exitCode=0 Feb 28 10:51:42 crc kubenswrapper[4996]: I0228 10:51:42.262404 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88f4b" event={"ID":"b07ad513-2f82-45db-b17c-46cf0c4c3e39","Type":"ContainerDied","Data":"c84b5957f2d35dd9c22032b27cc3022f3ad88bfbf916134a9a731fcc1e06dfd6"} Feb 28 10:51:42 crc kubenswrapper[4996]: I0228 10:51:42.262785 
4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88f4b" event={"ID":"b07ad513-2f82-45db-b17c-46cf0c4c3e39","Type":"ContainerStarted","Data":"946c18ebc9499dda28694a718edffab322b582e8732db0f7bf642ff33959130c"} Feb 28 10:51:43 crc kubenswrapper[4996]: I0228 10:51:43.276284 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88f4b" event={"ID":"b07ad513-2f82-45db-b17c-46cf0c4c3e39","Type":"ContainerStarted","Data":"70d80e6c1991865bd8da2f2fec418221bde5229961106b59c029991a61cac683"} Feb 28 10:51:44 crc kubenswrapper[4996]: I0228 10:51:44.289265 4996 generic.go:334] "Generic (PLEG): container finished" podID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerID="70d80e6c1991865bd8da2f2fec418221bde5229961106b59c029991a61cac683" exitCode=0 Feb 28 10:51:44 crc kubenswrapper[4996]: I0228 10:51:44.289326 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88f4b" event={"ID":"b07ad513-2f82-45db-b17c-46cf0c4c3e39","Type":"ContainerDied","Data":"70d80e6c1991865bd8da2f2fec418221bde5229961106b59c029991a61cac683"} Feb 28 10:51:45 crc kubenswrapper[4996]: I0228 10:51:45.299884 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88f4b" event={"ID":"b07ad513-2f82-45db-b17c-46cf0c4c3e39","Type":"ContainerStarted","Data":"3196e7ebdee521cd39eaa96d817f8d18988984c9bdaec4e60d6242a525ee471d"} Feb 28 10:51:45 crc kubenswrapper[4996]: I0228 10:51:45.325232 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-88f4b" podStartSLOduration=2.904070759 podStartE2EDuration="5.325206947s" podCreationTimestamp="2026-02-28 10:51:40 +0000 UTC" firstStartedPulling="2026-02-28 10:51:42.26382162 +0000 UTC m=+6665.954624431" lastFinishedPulling="2026-02-28 10:51:44.684957778 +0000 UTC m=+6668.375760619" observedRunningTime="2026-02-28 
10:51:45.318847251 +0000 UTC m=+6669.009650062" watchObservedRunningTime="2026-02-28 10:51:45.325206947 +0000 UTC m=+6669.016009758" Feb 28 10:51:48 crc kubenswrapper[4996]: I0228 10:51:48.033312 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:51:48 crc kubenswrapper[4996]: E0228 10:51:48.034138 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:51:51 crc kubenswrapper[4996]: I0228 10:51:51.075302 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:51 crc kubenswrapper[4996]: I0228 10:51:51.077393 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:51 crc kubenswrapper[4996]: I0228 10:51:51.148661 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:51 crc kubenswrapper[4996]: I0228 10:51:51.423366 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:51 crc kubenswrapper[4996]: I0228 10:51:51.473892 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88f4b"] Feb 28 10:51:53 crc kubenswrapper[4996]: I0228 10:51:53.375161 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-88f4b" podUID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" 
containerName="registry-server" containerID="cri-o://3196e7ebdee521cd39eaa96d817f8d18988984c9bdaec4e60d6242a525ee471d" gracePeriod=2 Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.388737 4996 generic.go:334] "Generic (PLEG): container finished" podID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerID="3196e7ebdee521cd39eaa96d817f8d18988984c9bdaec4e60d6242a525ee471d" exitCode=0 Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.388829 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88f4b" event={"ID":"b07ad513-2f82-45db-b17c-46cf0c4c3e39","Type":"ContainerDied","Data":"3196e7ebdee521cd39eaa96d817f8d18988984c9bdaec4e60d6242a525ee471d"} Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.389374 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-88f4b" event={"ID":"b07ad513-2f82-45db-b17c-46cf0c4c3e39","Type":"ContainerDied","Data":"946c18ebc9499dda28694a718edffab322b582e8732db0f7bf642ff33959130c"} Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.389400 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="946c18ebc9499dda28694a718edffab322b582e8732db0f7bf642ff33959130c" Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.410256 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.557307 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-utilities\") pod \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.557384 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frrxf\" (UniqueName: \"kubernetes.io/projected/b07ad513-2f82-45db-b17c-46cf0c4c3e39-kube-api-access-frrxf\") pod \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.557478 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-catalog-content\") pod \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\" (UID: \"b07ad513-2f82-45db-b17c-46cf0c4c3e39\") " Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.558585 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-utilities" (OuterVolumeSpecName: "utilities") pod "b07ad513-2f82-45db-b17c-46cf0c4c3e39" (UID: "b07ad513-2f82-45db-b17c-46cf0c4c3e39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.568244 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b07ad513-2f82-45db-b17c-46cf0c4c3e39-kube-api-access-frrxf" (OuterVolumeSpecName: "kube-api-access-frrxf") pod "b07ad513-2f82-45db-b17c-46cf0c4c3e39" (UID: "b07ad513-2f82-45db-b17c-46cf0c4c3e39"). InnerVolumeSpecName "kube-api-access-frrxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.617239 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b07ad513-2f82-45db-b17c-46cf0c4c3e39" (UID: "b07ad513-2f82-45db-b17c-46cf0c4c3e39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.660056 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.660090 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frrxf\" (UniqueName: \"kubernetes.io/projected/b07ad513-2f82-45db-b17c-46cf0c4c3e39-kube-api-access-frrxf\") on node \"crc\" DevicePath \"\"" Feb 28 10:51:54 crc kubenswrapper[4996]: I0228 10:51:54.660100 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b07ad513-2f82-45db-b17c-46cf0c4c3e39-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 10:51:55 crc kubenswrapper[4996]: I0228 10:51:55.403380 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-88f4b" Feb 28 10:51:55 crc kubenswrapper[4996]: I0228 10:51:55.433125 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-88f4b"] Feb 28 10:51:55 crc kubenswrapper[4996]: I0228 10:51:55.446027 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-88f4b"] Feb 28 10:51:57 crc kubenswrapper[4996]: I0228 10:51:57.048428 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" path="/var/lib/kubelet/pods/b07ad513-2f82-45db-b17c-46cf0c4c3e39/volumes" Feb 28 10:51:59 crc kubenswrapper[4996]: I0228 10:51:59.033734 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:51:59 crc kubenswrapper[4996]: E0228 10:51:59.034517 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.157827 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537932-5qbbl"] Feb 28 10:52:00 crc kubenswrapper[4996]: E0228 10:52:00.158353 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerName="extract-utilities" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.158368 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerName="extract-utilities" Feb 28 10:52:00 crc kubenswrapper[4996]: E0228 10:52:00.158412 4996 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerName="registry-server" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.158421 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerName="registry-server" Feb 28 10:52:00 crc kubenswrapper[4996]: E0228 10:52:00.158445 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerName="extract-content" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.158454 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerName="extract-content" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.158710 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b07ad513-2f82-45db-b17c-46cf0c4c3e39" containerName="registry-server" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.159522 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537932-5qbbl" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.161822 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.162032 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.163447 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.170785 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537932-5qbbl"] Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.291102 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlkxm\" (UniqueName: \"kubernetes.io/projected/42f05a25-8a05-441e-afe0-2293b23f69c8-kube-api-access-nlkxm\") pod \"auto-csr-approver-29537932-5qbbl\" (UID: \"42f05a25-8a05-441e-afe0-2293b23f69c8\") " pod="openshift-infra/auto-csr-approver-29537932-5qbbl" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.394162 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlkxm\" (UniqueName: \"kubernetes.io/projected/42f05a25-8a05-441e-afe0-2293b23f69c8-kube-api-access-nlkxm\") pod \"auto-csr-approver-29537932-5qbbl\" (UID: \"42f05a25-8a05-441e-afe0-2293b23f69c8\") " pod="openshift-infra/auto-csr-approver-29537932-5qbbl" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.417475 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlkxm\" (UniqueName: \"kubernetes.io/projected/42f05a25-8a05-441e-afe0-2293b23f69c8-kube-api-access-nlkxm\") pod \"auto-csr-approver-29537932-5qbbl\" (UID: \"42f05a25-8a05-441e-afe0-2293b23f69c8\") " 
pod="openshift-infra/auto-csr-approver-29537932-5qbbl" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.482731 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537932-5qbbl" Feb 28 10:52:00 crc kubenswrapper[4996]: I0228 10:52:00.963560 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537932-5qbbl"] Feb 28 10:52:01 crc kubenswrapper[4996]: I0228 10:52:01.591552 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537932-5qbbl" event={"ID":"42f05a25-8a05-441e-afe0-2293b23f69c8","Type":"ContainerStarted","Data":"c6a89491927cbde8926978451b9f95ba7845cf4457502135fecf4cd50b5a34cf"} Feb 28 10:52:02 crc kubenswrapper[4996]: I0228 10:52:02.606785 4996 generic.go:334] "Generic (PLEG): container finished" podID="42f05a25-8a05-441e-afe0-2293b23f69c8" containerID="d0450795667c8db77d886bb407eb7a6329072eec12810280384d2e51d3f030a8" exitCode=0 Feb 28 10:52:02 crc kubenswrapper[4996]: I0228 10:52:02.606882 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537932-5qbbl" event={"ID":"42f05a25-8a05-441e-afe0-2293b23f69c8","Type":"ContainerDied","Data":"d0450795667c8db77d886bb407eb7a6329072eec12810280384d2e51d3f030a8"} Feb 28 10:52:03 crc kubenswrapper[4996]: I0228 10:52:03.904875 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537932-5qbbl" Feb 28 10:52:03 crc kubenswrapper[4996]: I0228 10:52:03.964195 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlkxm\" (UniqueName: \"kubernetes.io/projected/42f05a25-8a05-441e-afe0-2293b23f69c8-kube-api-access-nlkxm\") pod \"42f05a25-8a05-441e-afe0-2293b23f69c8\" (UID: \"42f05a25-8a05-441e-afe0-2293b23f69c8\") " Feb 28 10:52:03 crc kubenswrapper[4996]: I0228 10:52:03.970917 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f05a25-8a05-441e-afe0-2293b23f69c8-kube-api-access-nlkxm" (OuterVolumeSpecName: "kube-api-access-nlkxm") pod "42f05a25-8a05-441e-afe0-2293b23f69c8" (UID: "42f05a25-8a05-441e-afe0-2293b23f69c8"). InnerVolumeSpecName "kube-api-access-nlkxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:52:04 crc kubenswrapper[4996]: I0228 10:52:04.067076 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlkxm\" (UniqueName: \"kubernetes.io/projected/42f05a25-8a05-441e-afe0-2293b23f69c8-kube-api-access-nlkxm\") on node \"crc\" DevicePath \"\"" Feb 28 10:52:04 crc kubenswrapper[4996]: I0228 10:52:04.626321 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537932-5qbbl" event={"ID":"42f05a25-8a05-441e-afe0-2293b23f69c8","Type":"ContainerDied","Data":"c6a89491927cbde8926978451b9f95ba7845cf4457502135fecf4cd50b5a34cf"} Feb 28 10:52:04 crc kubenswrapper[4996]: I0228 10:52:04.626373 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6a89491927cbde8926978451b9f95ba7845cf4457502135fecf4cd50b5a34cf" Feb 28 10:52:04 crc kubenswrapper[4996]: I0228 10:52:04.626427 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537932-5qbbl" Feb 28 10:52:04 crc kubenswrapper[4996]: I0228 10:52:04.993245 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537926-btmrz"] Feb 28 10:52:05 crc kubenswrapper[4996]: I0228 10:52:05.003157 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537926-btmrz"] Feb 28 10:52:05 crc kubenswrapper[4996]: I0228 10:52:05.048395 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0e47966-231a-4b95-8185-2fef02afdc8e" path="/var/lib/kubelet/pods/c0e47966-231a-4b95-8185-2fef02afdc8e/volumes" Feb 28 10:52:10 crc kubenswrapper[4996]: I0228 10:52:10.032980 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:52:10 crc kubenswrapper[4996]: E0228 10:52:10.033898 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:52:21 crc kubenswrapper[4996]: I0228 10:52:21.034073 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:52:21 crc kubenswrapper[4996]: E0228 10:52:21.035761 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:52:35 crc kubenswrapper[4996]: I0228 10:52:35.033954 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:52:35 crc kubenswrapper[4996]: E0228 10:52:35.035358 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:52:50 crc kubenswrapper[4996]: I0228 10:52:50.033199 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:52:50 crc kubenswrapper[4996]: E0228 10:52:50.033976 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:52:50 crc kubenswrapper[4996]: I0228 10:52:50.152718 4996 scope.go:117] "RemoveContainer" containerID="7a137ced98c34e0d0a6b21e3485e8c3f74338a4150eff22de2a1cf8446da921b" Feb 28 10:53:03 crc kubenswrapper[4996]: I0228 10:53:03.033874 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:53:03 crc kubenswrapper[4996]: E0228 10:53:03.034735 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:53:14 crc kubenswrapper[4996]: I0228 10:53:14.032496 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:53:14 crc kubenswrapper[4996]: E0228 10:53:14.033309 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:53:28 crc kubenswrapper[4996]: I0228 10:53:28.033551 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:53:28 crc kubenswrapper[4996]: E0228 10:53:28.034346 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.025554 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7zq8d"] Feb 28 10:53:36 crc kubenswrapper[4996]: E0228 10:53:36.026452 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f05a25-8a05-441e-afe0-2293b23f69c8" containerName="oc" Feb 28 10:53:36 crc kubenswrapper[4996]: 
I0228 10:53:36.026464 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f05a25-8a05-441e-afe0-2293b23f69c8" containerName="oc" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.026663 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f05a25-8a05-441e-afe0-2293b23f69c8" containerName="oc" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.027966 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.062262 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zq8d"] Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.087907 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-catalog-content\") pod \"redhat-marketplace-7zq8d\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") " pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.088733 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4pw2\" (UniqueName: \"kubernetes.io/projected/2b6b4865-023d-442d-bab8-29c686ba23f6-kube-api-access-x4pw2\") pod \"redhat-marketplace-7zq8d\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") " pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.088930 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-utilities\") pod \"redhat-marketplace-7zq8d\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") " pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.192512 
4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-catalog-content\") pod \"redhat-marketplace-7zq8d\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") " pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.192630 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4pw2\" (UniqueName: \"kubernetes.io/projected/2b6b4865-023d-442d-bab8-29c686ba23f6-kube-api-access-x4pw2\") pod \"redhat-marketplace-7zq8d\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") " pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.192705 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-utilities\") pod \"redhat-marketplace-7zq8d\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") " pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.193073 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-catalog-content\") pod \"redhat-marketplace-7zq8d\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") " pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.193291 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-utilities\") pod \"redhat-marketplace-7zq8d\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") " pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.216660 4996 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x4pw2\" (UniqueName: \"kubernetes.io/projected/2b6b4865-023d-442d-bab8-29c686ba23f6-kube-api-access-x4pw2\") pod \"redhat-marketplace-7zq8d\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") " pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.385708 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:36 crc kubenswrapper[4996]: I0228 10:53:36.894837 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zq8d"] Feb 28 10:53:37 crc kubenswrapper[4996]: I0228 10:53:37.526254 4996 generic.go:334] "Generic (PLEG): container finished" podID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerID="3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e" exitCode=0 Feb 28 10:53:37 crc kubenswrapper[4996]: I0228 10:53:37.526381 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zq8d" event={"ID":"2b6b4865-023d-442d-bab8-29c686ba23f6","Type":"ContainerDied","Data":"3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e"} Feb 28 10:53:37 crc kubenswrapper[4996]: I0228 10:53:37.526613 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zq8d" event={"ID":"2b6b4865-023d-442d-bab8-29c686ba23f6","Type":"ContainerStarted","Data":"222ae50a5973697dcbbe109931bfcec7ebce4dd6728c3d2a40177bca2c3c5c3a"} Feb 28 10:53:37 crc kubenswrapper[4996]: I0228 10:53:37.530051 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:53:38 crc kubenswrapper[4996]: I0228 10:53:38.537277 4996 generic.go:334] "Generic (PLEG): container finished" podID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerID="0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32" exitCode=0 Feb 28 10:53:38 crc 
kubenswrapper[4996]: I0228 10:53:38.537405 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zq8d" event={"ID":"2b6b4865-023d-442d-bab8-29c686ba23f6","Type":"ContainerDied","Data":"0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32"} Feb 28 10:53:39 crc kubenswrapper[4996]: I0228 10:53:39.033174 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:53:39 crc kubenswrapper[4996]: E0228 10:53:39.033521 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 10:53:39 crc kubenswrapper[4996]: I0228 10:53:39.550522 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zq8d" event={"ID":"2b6b4865-023d-442d-bab8-29c686ba23f6","Type":"ContainerStarted","Data":"43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35"} Feb 28 10:53:39 crc kubenswrapper[4996]: I0228 10:53:39.575612 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7zq8d" podStartSLOduration=3.092632987 podStartE2EDuration="4.575580306s" podCreationTimestamp="2026-02-28 10:53:35 +0000 UTC" firstStartedPulling="2026-02-28 10:53:37.529792335 +0000 UTC m=+6781.220595146" lastFinishedPulling="2026-02-28 10:53:39.012739634 +0000 UTC m=+6782.703542465" observedRunningTime="2026-02-28 10:53:39.572694925 +0000 UTC m=+6783.263497826" watchObservedRunningTime="2026-02-28 10:53:39.575580306 +0000 UTC m=+6783.266383117" Feb 28 10:53:46 crc kubenswrapper[4996]: I0228 10:53:46.386851 4996 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:46 crc kubenswrapper[4996]: I0228 10:53:46.387432 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:46 crc kubenswrapper[4996]: I0228 10:53:46.438231 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:46 crc kubenswrapper[4996]: I0228 10:53:46.701407 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:46 crc kubenswrapper[4996]: I0228 10:53:46.753998 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zq8d"] Feb 28 10:53:48 crc kubenswrapper[4996]: I0228 10:53:48.651311 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7zq8d" podUID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerName="registry-server" containerID="cri-o://43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35" gracePeriod=2 Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.601886 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.667772 4996 generic.go:334] "Generic (PLEG): container finished" podID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerID="43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35" exitCode=0 Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.667817 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zq8d" event={"ID":"2b6b4865-023d-442d-bab8-29c686ba23f6","Type":"ContainerDied","Data":"43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35"} Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.667874 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7zq8d" event={"ID":"2b6b4865-023d-442d-bab8-29c686ba23f6","Type":"ContainerDied","Data":"222ae50a5973697dcbbe109931bfcec7ebce4dd6728c3d2a40177bca2c3c5c3a"} Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.667892 4996 scope.go:117] "RemoveContainer" containerID="43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35" Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.667892 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7zq8d" Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.687596 4996 scope.go:117] "RemoveContainer" containerID="0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32" Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.704303 4996 scope.go:117] "RemoveContainer" containerID="3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e" Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.759969 4996 scope.go:117] "RemoveContainer" containerID="43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35" Feb 28 10:53:49 crc kubenswrapper[4996]: E0228 10:53:49.760527 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35\": container with ID starting with 43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35 not found: ID does not exist" containerID="43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35" Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.760575 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35"} err="failed to get container status \"43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35\": rpc error: code = NotFound desc = could not find container \"43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35\": container with ID starting with 43b11d60c524057465576fb2068795b87b6b20e7bd2af16e33b068df485f3b35 not found: ID does not exist" Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.760629 4996 scope.go:117] "RemoveContainer" containerID="0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32" Feb 28 10:53:49 crc kubenswrapper[4996]: E0228 10:53:49.760968 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32\": container with ID starting with 0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32 not found: ID does not exist" containerID="0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32"
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.760996 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32"} err="failed to get container status \"0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32\": rpc error: code = NotFound desc = could not find container \"0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32\": container with ID starting with 0a51e463a2bacef987671e6f61f06569e86546e8126e64dbe7c8aca5278a1d32 not found: ID does not exist"
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.761039 4996 scope.go:117] "RemoveContainer" containerID="3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e"
Feb 28 10:53:49 crc kubenswrapper[4996]: E0228 10:53:49.761292 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e\": container with ID starting with 3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e not found: ID does not exist" containerID="3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e"
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.761350 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e"} err="failed to get container status \"3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e\": rpc error: code = NotFound desc = could not find container 
\"3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e\": container with ID starting with 3067258f60244d9d9e1bcc6b4c5c08c8dba0ea14294aec18b4823ef9b62dec8e not found: ID does not exist"
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.770074 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-utilities\") pod \"2b6b4865-023d-442d-bab8-29c686ba23f6\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") "
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.770210 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4pw2\" (UniqueName: \"kubernetes.io/projected/2b6b4865-023d-442d-bab8-29c686ba23f6-kube-api-access-x4pw2\") pod \"2b6b4865-023d-442d-bab8-29c686ba23f6\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") "
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.770317 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-catalog-content\") pod \"2b6b4865-023d-442d-bab8-29c686ba23f6\" (UID: \"2b6b4865-023d-442d-bab8-29c686ba23f6\") "
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.772098 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-utilities" (OuterVolumeSpecName: "utilities") pod "2b6b4865-023d-442d-bab8-29c686ba23f6" (UID: "2b6b4865-023d-442d-bab8-29c686ba23f6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.778141 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b6b4865-023d-442d-bab8-29c686ba23f6-kube-api-access-x4pw2" (OuterVolumeSpecName: "kube-api-access-x4pw2") pod "2b6b4865-023d-442d-bab8-29c686ba23f6" (UID: "2b6b4865-023d-442d-bab8-29c686ba23f6"). InnerVolumeSpecName "kube-api-access-x4pw2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.800250 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b6b4865-023d-442d-bab8-29c686ba23f6" (UID: "2b6b4865-023d-442d-bab8-29c686ba23f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.871997 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.872060 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b6b4865-023d-442d-bab8-29c686ba23f6-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 10:53:49 crc kubenswrapper[4996]: I0228 10:53:49.872075 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4pw2\" (UniqueName: \"kubernetes.io/projected/2b6b4865-023d-442d-bab8-29c686ba23f6-kube-api-access-x4pw2\") on node \"crc\" DevicePath \"\""
Feb 28 10:53:50 crc kubenswrapper[4996]: I0228 10:53:50.002247 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zq8d"]
Feb 28 10:53:50 crc kubenswrapper[4996]: I0228 
10:53:50.010556 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7zq8d"]
Feb 28 10:53:51 crc kubenswrapper[4996]: I0228 10:53:51.046616 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b6b4865-023d-442d-bab8-29c686ba23f6" path="/var/lib/kubelet/pods/2b6b4865-023d-442d-bab8-29c686ba23f6/volumes"
Feb 28 10:53:54 crc kubenswrapper[4996]: I0228 10:53:54.032809 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7"
Feb 28 10:53:54 crc kubenswrapper[4996]: I0228 10:53:54.723613 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"2d7749c606980be590cdf9e1ba9745d9acb89d5faee0102da2aab1679aa836ba"}
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.538972 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bcc8t"]
Feb 28 10:53:55 crc kubenswrapper[4996]: E0228 10:53:55.539738 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerName="extract-utilities"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.539756 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerName="extract-utilities"
Feb 28 10:53:55 crc kubenswrapper[4996]: E0228 10:53:55.539780 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerName="registry-server"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.539788 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerName="registry-server"
Feb 28 10:53:55 crc kubenswrapper[4996]: E0228 10:53:55.539803 4996 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerName="extract-content"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.539811 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerName="extract-content"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.540049 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b6b4865-023d-442d-bab8-29c686ba23f6" containerName="registry-server"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.541554 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.553355 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcc8t"]
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.691931 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbnbb\" (UniqueName: \"kubernetes.io/projected/ab8149bd-3293-446d-9bcf-d001f1d985ce-kube-api-access-gbnbb\") pod \"redhat-operators-bcc8t\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") " pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.692019 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-catalog-content\") pod \"redhat-operators-bcc8t\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") " pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.692208 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-utilities\") pod \"redhat-operators-bcc8t\" (UID: 
\"ab8149bd-3293-446d-9bcf-d001f1d985ce\") " pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.794167 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-catalog-content\") pod \"redhat-operators-bcc8t\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") " pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.794345 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-utilities\") pod \"redhat-operators-bcc8t\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") " pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.794392 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnbb\" (UniqueName: \"kubernetes.io/projected/ab8149bd-3293-446d-9bcf-d001f1d985ce-kube-api-access-gbnbb\") pod \"redhat-operators-bcc8t\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") " pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.794670 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-catalog-content\") pod \"redhat-operators-bcc8t\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") " pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.795179 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-utilities\") pod \"redhat-operators-bcc8t\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") " 
pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.814873 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnbb\" (UniqueName: \"kubernetes.io/projected/ab8149bd-3293-446d-9bcf-d001f1d985ce-kube-api-access-gbnbb\") pod \"redhat-operators-bcc8t\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") " pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:55 crc kubenswrapper[4996]: I0228 10:53:55.865892 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:53:56 crc kubenswrapper[4996]: I0228 10:53:56.321038 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bcc8t"]
Feb 28 10:53:56 crc kubenswrapper[4996]: I0228 10:53:56.755638 4996 generic.go:334] "Generic (PLEG): container finished" podID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerID="d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84" exitCode=0
Feb 28 10:53:56 crc kubenswrapper[4996]: I0228 10:53:56.755693 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcc8t" event={"ID":"ab8149bd-3293-446d-9bcf-d001f1d985ce","Type":"ContainerDied","Data":"d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84"}
Feb 28 10:53:56 crc kubenswrapper[4996]: I0228 10:53:56.755961 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcc8t" event={"ID":"ab8149bd-3293-446d-9bcf-d001f1d985ce","Type":"ContainerStarted","Data":"0274fa88b853bec814b6d44088dd0721b0b5753ab7b4b1bcaae46f3bb4723c83"}
Feb 28 10:53:57 crc kubenswrapper[4996]: I0228 10:53:57.766291 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcc8t" 
event={"ID":"ab8149bd-3293-446d-9bcf-d001f1d985ce","Type":"ContainerStarted","Data":"817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b"}
Feb 28 10:53:59 crc kubenswrapper[4996]: I0228 10:53:59.785598 4996 generic.go:334] "Generic (PLEG): container finished" podID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerID="817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b" exitCode=0
Feb 28 10:53:59 crc kubenswrapper[4996]: I0228 10:53:59.785696 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcc8t" event={"ID":"ab8149bd-3293-446d-9bcf-d001f1d985ce","Type":"ContainerDied","Data":"817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b"}
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.158856 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537934-vk9p6"]
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.160941 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537934-vk9p6"
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.164180 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.165295 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.168658 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq"
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.173519 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537934-vk9p6"]
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.191684 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc7tl\" (UniqueName: \"kubernetes.io/projected/2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa-kube-api-access-hc7tl\") pod \"auto-csr-approver-29537934-vk9p6\" (UID: \"2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa\") " pod="openshift-infra/auto-csr-approver-29537934-vk9p6"
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.294847 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc7tl\" (UniqueName: \"kubernetes.io/projected/2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa-kube-api-access-hc7tl\") pod \"auto-csr-approver-29537934-vk9p6\" (UID: \"2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa\") " pod="openshift-infra/auto-csr-approver-29537934-vk9p6"
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.329890 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc7tl\" (UniqueName: \"kubernetes.io/projected/2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa-kube-api-access-hc7tl\") pod \"auto-csr-approver-29537934-vk9p6\" (UID: \"2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa\") " 
pod="openshift-infra/auto-csr-approver-29537934-vk9p6"
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.487106 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537934-vk9p6"
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.798460 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcc8t" event={"ID":"ab8149bd-3293-446d-9bcf-d001f1d985ce","Type":"ContainerStarted","Data":"9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777"}
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.822442 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bcc8t" podStartSLOduration=2.394816537 podStartE2EDuration="5.822425808s" podCreationTimestamp="2026-02-28 10:53:55 +0000 UTC" firstStartedPulling="2026-02-28 10:53:56.757303616 +0000 UTC m=+6800.448106417" lastFinishedPulling="2026-02-28 10:54:00.184912867 +0000 UTC m=+6803.875715688" observedRunningTime="2026-02-28 10:54:00.817106348 +0000 UTC m=+6804.507909189" watchObservedRunningTime="2026-02-28 10:54:00.822425808 +0000 UTC m=+6804.513228619"
Feb 28 10:54:00 crc kubenswrapper[4996]: I0228 10:54:00.936695 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537934-vk9p6"]
Feb 28 10:54:01 crc kubenswrapper[4996]: I0228 10:54:01.808356 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537934-vk9p6" event={"ID":"2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa","Type":"ContainerStarted","Data":"3360824bcfb6807362aa8ddbddf3cd2982196d143c07e41f293a31a3e4a3e6dc"}
Feb 28 10:54:02 crc kubenswrapper[4996]: I0228 10:54:02.817950 4996 generic.go:334] "Generic (PLEG): container finished" podID="2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa" containerID="cbbc5d76d391f3b4e754ff52037e856d743037826c0aeb804f67cc219767902c" exitCode=0
Feb 28 10:54:02 crc kubenswrapper[4996]: 
I0228 10:54:02.818163 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537934-vk9p6" event={"ID":"2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa","Type":"ContainerDied","Data":"cbbc5d76d391f3b4e754ff52037e856d743037826c0aeb804f67cc219767902c"}
Feb 28 10:54:04 crc kubenswrapper[4996]: I0228 10:54:04.171507 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537934-vk9p6"
Feb 28 10:54:04 crc kubenswrapper[4996]: I0228 10:54:04.365433 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc7tl\" (UniqueName: \"kubernetes.io/projected/2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa-kube-api-access-hc7tl\") pod \"2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa\" (UID: \"2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa\") "
Feb 28 10:54:04 crc kubenswrapper[4996]: I0228 10:54:04.371596 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa-kube-api-access-hc7tl" (OuterVolumeSpecName: "kube-api-access-hc7tl") pod "2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa" (UID: "2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa"). InnerVolumeSpecName "kube-api-access-hc7tl". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 10:54:04 crc kubenswrapper[4996]: I0228 10:54:04.468153 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc7tl\" (UniqueName: \"kubernetes.io/projected/2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa-kube-api-access-hc7tl\") on node \"crc\" DevicePath \"\""
Feb 28 10:54:04 crc kubenswrapper[4996]: I0228 10:54:04.838396 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537934-vk9p6" event={"ID":"2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa","Type":"ContainerDied","Data":"3360824bcfb6807362aa8ddbddf3cd2982196d143c07e41f293a31a3e4a3e6dc"}
Feb 28 10:54:04 crc kubenswrapper[4996]: I0228 10:54:04.838724 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3360824bcfb6807362aa8ddbddf3cd2982196d143c07e41f293a31a3e4a3e6dc"
Feb 28 10:54:04 crc kubenswrapper[4996]: I0228 10:54:04.838515 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537934-vk9p6"
Feb 28 10:54:05 crc kubenswrapper[4996]: I0228 10:54:05.256562 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537928-hn97g"]
Feb 28 10:54:05 crc kubenswrapper[4996]: I0228 10:54:05.265771 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537928-hn97g"]
Feb 28 10:54:05 crc kubenswrapper[4996]: I0228 10:54:05.866984 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:54:05 crc kubenswrapper[4996]: I0228 10:54:05.867042 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:54:06 crc kubenswrapper[4996]: I0228 10:54:06.912656 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bcc8t" 
podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerName="registry-server" probeResult="failure" output=<
Feb 28 10:54:06 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s
Feb 28 10:54:06 crc kubenswrapper[4996]: >
Feb 28 10:54:07 crc kubenswrapper[4996]: I0228 10:54:07.044692 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f0d826-2096-4b46-bac4-752699d121e0" path="/var/lib/kubelet/pods/67f0d826-2096-4b46-bac4-752699d121e0/volumes"
Feb 28 10:54:15 crc kubenswrapper[4996]: I0228 10:54:15.914611 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:54:15 crc kubenswrapper[4996]: I0228 10:54:15.964194 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:54:16 crc kubenswrapper[4996]: I0228 10:54:16.153351 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcc8t"]
Feb 28 10:54:16 crc kubenswrapper[4996]: I0228 10:54:16.947415 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bcc8t" podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerName="registry-server" containerID="cri-o://9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777" gracePeriod=2
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.470042 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.626101 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-catalog-content\") pod \"ab8149bd-3293-446d-9bcf-d001f1d985ce\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") "
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.626241 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbnbb\" (UniqueName: \"kubernetes.io/projected/ab8149bd-3293-446d-9bcf-d001f1d985ce-kube-api-access-gbnbb\") pod \"ab8149bd-3293-446d-9bcf-d001f1d985ce\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") "
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.626387 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-utilities\") pod \"ab8149bd-3293-446d-9bcf-d001f1d985ce\" (UID: \"ab8149bd-3293-446d-9bcf-d001f1d985ce\") "
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.627680 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-utilities" (OuterVolumeSpecName: "utilities") pod "ab8149bd-3293-446d-9bcf-d001f1d985ce" (UID: "ab8149bd-3293-446d-9bcf-d001f1d985ce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.645087 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8149bd-3293-446d-9bcf-d001f1d985ce-kube-api-access-gbnbb" (OuterVolumeSpecName: "kube-api-access-gbnbb") pod "ab8149bd-3293-446d-9bcf-d001f1d985ce" (UID: "ab8149bd-3293-446d-9bcf-d001f1d985ce"). InnerVolumeSpecName "kube-api-access-gbnbb". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.729248 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.729288 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbnbb\" (UniqueName: \"kubernetes.io/projected/ab8149bd-3293-446d-9bcf-d001f1d985ce-kube-api-access-gbnbb\") on node \"crc\" DevicePath \"\""
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.806203 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab8149bd-3293-446d-9bcf-d001f1d985ce" (UID: "ab8149bd-3293-446d-9bcf-d001f1d985ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.831731 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab8149bd-3293-446d-9bcf-d001f1d985ce-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.959164 4996 generic.go:334] "Generic (PLEG): container finished" podID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerID="9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777" exitCode=0
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.959261 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bcc8t" event={"ID":"ab8149bd-3293-446d-9bcf-d001f1d985ce","Type":"ContainerDied","Data":"9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777"}
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.959547 4996 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-bcc8t" event={"ID":"ab8149bd-3293-446d-9bcf-d001f1d985ce","Type":"ContainerDied","Data":"0274fa88b853bec814b6d44088dd0721b0b5753ab7b4b1bcaae46f3bb4723c83"}
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.959578 4996 scope.go:117] "RemoveContainer" containerID="9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777"
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.959296 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bcc8t"
Feb 28 10:54:17 crc kubenswrapper[4996]: I0228 10:54:17.992987 4996 scope.go:117] "RemoveContainer" containerID="817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b"
Feb 28 10:54:18 crc kubenswrapper[4996]: I0228 10:54:18.002133 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bcc8t"]
Feb 28 10:54:18 crc kubenswrapper[4996]: I0228 10:54:18.016233 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bcc8t"]
Feb 28 10:54:18 crc kubenswrapper[4996]: I0228 10:54:18.019522 4996 scope.go:117] "RemoveContainer" containerID="d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84"
Feb 28 10:54:18 crc kubenswrapper[4996]: I0228 10:54:18.071433 4996 scope.go:117] "RemoveContainer" containerID="9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777"
Feb 28 10:54:18 crc kubenswrapper[4996]: E0228 10:54:18.093484 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777\": container with ID starting with 9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777 not found: ID does not exist" containerID="9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777"
Feb 28 10:54:18 crc kubenswrapper[4996]: I0228 10:54:18.093545 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777"} err="failed to get container status \"9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777\": rpc error: code = NotFound desc = could not find container \"9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777\": container with ID starting with 9566fa6ed06de88c302e507eb8c9b953a95ff0a080b4fa12f7913010a224e777 not found: ID does not exist"
Feb 28 10:54:18 crc kubenswrapper[4996]: I0228 10:54:18.093573 4996 scope.go:117] "RemoveContainer" containerID="817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b"
Feb 28 10:54:18 crc kubenswrapper[4996]: E0228 10:54:18.094750 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b\": container with ID starting with 817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b not found: ID does not exist" containerID="817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b"
Feb 28 10:54:18 crc kubenswrapper[4996]: I0228 10:54:18.094796 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b"} err="failed to get container status \"817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b\": rpc error: code = NotFound desc = could not find container \"817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b\": container with ID starting with 817b9a46b549159fcfd67ac01a7fb69e2f74d34ccfdfca561a82e9fb5eb87f4b not found: ID does not exist"
Feb 28 10:54:18 crc kubenswrapper[4996]: I0228 10:54:18.094818 4996 scope.go:117] "RemoveContainer" containerID="d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84"
Feb 28 10:54:18 crc kubenswrapper[4996]: E0228 
10:54:18.095438 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84\": container with ID starting with d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84 not found: ID does not exist" containerID="d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84"
Feb 28 10:54:18 crc kubenswrapper[4996]: I0228 10:54:18.095457 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84"} err="failed to get container status \"d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84\": rpc error: code = NotFound desc = could not find container \"d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84\": container with ID starting with d6e8fd8317d76ff62b6309fb8843bd8fcf9ff8cf6f72e3fea9034201f6ddff84 not found: ID does not exist"
Feb 28 10:54:19 crc kubenswrapper[4996]: I0228 10:54:19.044674 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" path="/var/lib/kubelet/pods/ab8149bd-3293-446d-9bcf-d001f1d985ce/volumes"
Feb 28 10:54:50 crc kubenswrapper[4996]: I0228 10:54:50.290367 4996 scope.go:117] "RemoveContainer" containerID="93c3eb428e768cdcfcbb6f50dfa6e43a9d0fed82274f94866c2d01b76f034d2b"
Feb 28 10:55:40 crc kubenswrapper[4996]: I0228 10:55:40.724203 4996 generic.go:334] "Generic (PLEG): container finished" podID="af137d17-a90e-42ea-8e73-3dba0196c670" containerID="37ccf4adad8e7601e917d0e0bbe9b0405790deb7146a77eb9734427081f2c65a" exitCode=0
Feb 28 10:55:40 crc kubenswrapper[4996]: I0228 10:55:40.724294 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" 
event={"ID":"af137d17-a90e-42ea-8e73-3dba0196c670","Type":"ContainerDied","Data":"37ccf4adad8e7601e917d0e0bbe9b0405790deb7146a77eb9734427081f2c65a"} Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.137817 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170261 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ceph\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170324 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config-secret\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170359 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-workdir\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170452 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-config-data\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170478 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170513 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ca-certs\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170592 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-temporary\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170621 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq2x9\" (UniqueName: \"kubernetes.io/projected/af137d17-a90e-42ea-8e73-3dba0196c670-kube-api-access-jq2x9\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170674 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.170713 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ssh-key\") pod \"af137d17-a90e-42ea-8e73-3dba0196c670\" (UID: \"af137d17-a90e-42ea-8e73-3dba0196c670\") " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 
10:55:42.171804 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.172740 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-config-data" (OuterVolumeSpecName: "config-data") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.177509 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af137d17-a90e-42ea-8e73-3dba0196c670-kube-api-access-jq2x9" (OuterVolumeSpecName: "kube-api-access-jq2x9") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "kube-api-access-jq2x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.177702 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ceph" (OuterVolumeSpecName: "ceph") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.185484 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.186207 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.205732 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.208689 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.227253 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.234738 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "af137d17-a90e-42ea-8e73-3dba0196c670" (UID: "af137d17-a90e-42ea-8e73-3dba0196c670"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.272746 4996 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.272806 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.272819 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.272900 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" 
Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.272918 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.272932 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/af137d17-a90e-42ea-8e73-3dba0196c670-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.272944 4996 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/af137d17-a90e-42ea-8e73-3dba0196c670-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.272953 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/af137d17-a90e-42ea-8e73-3dba0196c670-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.272967 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq2x9\" (UniqueName: \"kubernetes.io/projected/af137d17-a90e-42ea-8e73-3dba0196c670-kube-api-access-jq2x9\") on node \"crc\" DevicePath \"\"" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.273029 4996 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.298960 4996 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.374999 4996 reconciler_common.go:293] "Volume detached for volume 
\"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.746366 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"af137d17-a90e-42ea-8e73-3dba0196c670","Type":"ContainerDied","Data":"e7ff612e0b75a5ef2247e82cd24f2ac9b88e1a001ad42235a1939ab03f862a93"} Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.746673 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7ff612e0b75a5ef2247e82cd24f2ac9b88e1a001ad42235a1939ab03f862a93" Feb 28 10:55:42 crc kubenswrapper[4996]: I0228 10:55:42.746464 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.215219 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 10:55:45 crc kubenswrapper[4996]: E0228 10:55:45.216686 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa" containerName="oc" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.216715 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa" containerName="oc" Feb 28 10:55:45 crc kubenswrapper[4996]: E0228 10:55:45.216766 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerName="registry-server" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.216780 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerName="registry-server" Feb 28 10:55:45 crc kubenswrapper[4996]: E0228 10:55:45.216804 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af137d17-a90e-42ea-8e73-3dba0196c670" 
containerName="tempest-tests-tempest-tests-runner" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.216816 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="af137d17-a90e-42ea-8e73-3dba0196c670" containerName="tempest-tests-tempest-tests-runner" Feb 28 10:55:45 crc kubenswrapper[4996]: E0228 10:55:45.216838 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerName="extract-content" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.216851 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerName="extract-content" Feb 28 10:55:45 crc kubenswrapper[4996]: E0228 10:55:45.216874 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerName="extract-utilities" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.216886 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerName="extract-utilities" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.217261 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="af137d17-a90e-42ea-8e73-3dba0196c670" containerName="tempest-tests-tempest-tests-runner" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.217288 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa" containerName="oc" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.217314 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8149bd-3293-446d-9bcf-d001f1d985ce" containerName="registry-server" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.218496 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.221169 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-98kdl" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.247766 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.342124 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f52349d5-5dab-4972-bdb2-835cb675071f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.342310 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cb28\" (UniqueName: \"kubernetes.io/projected/f52349d5-5dab-4972-bdb2-835cb675071f-kube-api-access-9cb28\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f52349d5-5dab-4972-bdb2-835cb675071f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.445095 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cb28\" (UniqueName: \"kubernetes.io/projected/f52349d5-5dab-4972-bdb2-835cb675071f-kube-api-access-9cb28\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f52349d5-5dab-4972-bdb2-835cb675071f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.445272 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f52349d5-5dab-4972-bdb2-835cb675071f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.446128 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f52349d5-5dab-4972-bdb2-835cb675071f\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.474085 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cb28\" (UniqueName: \"kubernetes.io/projected/f52349d5-5dab-4972-bdb2-835cb675071f-kube-api-access-9cb28\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f52349d5-5dab-4972-bdb2-835cb675071f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.477306 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f52349d5-5dab-4972-bdb2-835cb675071f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 10:55:45 crc kubenswrapper[4996]: I0228 10:55:45.549967 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 10:55:46 crc kubenswrapper[4996]: I0228 10:55:46.011986 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 10:55:46 crc kubenswrapper[4996]: I0228 10:55:46.785076 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f52349d5-5dab-4972-bdb2-835cb675071f","Type":"ContainerStarted","Data":"2342f67d2920136da1bb27d03bf72fd482402aba542520201312408a5c612517"} Feb 28 10:55:47 crc kubenswrapper[4996]: I0228 10:55:47.797142 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f52349d5-5dab-4972-bdb2-835cb675071f","Type":"ContainerStarted","Data":"d8507b2373bf3ed2dec5ea1a35202b00638a237ee05590d2c36f9e62579daa19"} Feb 28 10:55:47 crc kubenswrapper[4996]: I0228 10:55:47.815857 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.037742264 podStartE2EDuration="2.815836361s" podCreationTimestamp="2026-02-28 10:55:45 +0000 UTC" firstStartedPulling="2026-02-28 10:55:46.024764532 +0000 UTC m=+6909.715567353" lastFinishedPulling="2026-02-28 10:55:46.802858639 +0000 UTC m=+6910.493661450" observedRunningTime="2026-02-28 10:55:47.810319016 +0000 UTC m=+6911.501121827" watchObservedRunningTime="2026-02-28 10:55:47.815836361 +0000 UTC m=+6911.506639172" Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.143534 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537936-pqhss"] Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.145315 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537936-pqhss" Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.148798 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.148796 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.149271 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.153608 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537936-pqhss"] Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.184452 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl95m\" (UniqueName: \"kubernetes.io/projected/5781ac45-2386-4e29-90e8-2d8b81401ace-kube-api-access-fl95m\") pod \"auto-csr-approver-29537936-pqhss\" (UID: \"5781ac45-2386-4e29-90e8-2d8b81401ace\") " pod="openshift-infra/auto-csr-approver-29537936-pqhss" Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.285950 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl95m\" (UniqueName: \"kubernetes.io/projected/5781ac45-2386-4e29-90e8-2d8b81401ace-kube-api-access-fl95m\") pod \"auto-csr-approver-29537936-pqhss\" (UID: \"5781ac45-2386-4e29-90e8-2d8b81401ace\") " pod="openshift-infra/auto-csr-approver-29537936-pqhss" Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.307077 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl95m\" (UniqueName: \"kubernetes.io/projected/5781ac45-2386-4e29-90e8-2d8b81401ace-kube-api-access-fl95m\") pod \"auto-csr-approver-29537936-pqhss\" (UID: \"5781ac45-2386-4e29-90e8-2d8b81401ace\") " 
pod="openshift-infra/auto-csr-approver-29537936-pqhss" Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.461246 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537936-pqhss" Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.930916 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537936-pqhss"] Feb 28 10:56:00 crc kubenswrapper[4996]: I0228 10:56:00.968117 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537936-pqhss" event={"ID":"5781ac45-2386-4e29-90e8-2d8b81401ace","Type":"ContainerStarted","Data":"e2322873264a71c74102426fc84cd7eff9408e2b365776674693816cd9e19f4b"} Feb 28 10:56:02 crc kubenswrapper[4996]: I0228 10:56:02.988167 4996 generic.go:334] "Generic (PLEG): container finished" podID="5781ac45-2386-4e29-90e8-2d8b81401ace" containerID="407096447e3af8433244879b3c40c90a9681311534b577e80928bb6b6e53bc09" exitCode=0 Feb 28 10:56:02 crc kubenswrapper[4996]: I0228 10:56:02.988662 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537936-pqhss" event={"ID":"5781ac45-2386-4e29-90e8-2d8b81401ace","Type":"ContainerDied","Data":"407096447e3af8433244879b3c40c90a9681311534b577e80928bb6b6e53bc09"} Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.091610 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.092984 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.096173 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-private-key" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.096221 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-public-key" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.096250 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tobiko-tests-tobikotobiko-config" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.096291 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.096411 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"tobiko-secret" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.124912 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.161979 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162049 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162088 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162108 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c2dl\" (UniqueName: \"kubernetes.io/projected/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kube-api-access-7c2dl\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162136 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162164 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162185 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162224 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162363 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162423 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162530 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" 
(UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.162684 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.264821 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.264885 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.264927 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.264972 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.264993 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c2dl\" (UniqueName: \"kubernetes.io/projected/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kube-api-access-7c2dl\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.265034 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.265062 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.265082 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 
10:56:03.265110 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.265127 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.265149 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.265191 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.265526 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: 
\"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.266649 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.269669 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.273707 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-public-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.274403 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-private-key\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.274552 4996 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.274713 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.276318 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kubeconfig\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.276839 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ceph\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.289000 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-openstack-config-secret\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.296556 
4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c2dl\" (UniqueName: \"kubernetes.io/projected/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kube-api-access-7c2dl\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.301978 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ca-certs\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.308648 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tobiko-tests-tobiko-s00-podified-functional\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.426584 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.936983 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s00-podified-functional"] Feb 28 10:56:03 crc kubenswrapper[4996]: W0228 10:56:03.937058 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab8d5e2_9d4f_4ba7_a2e7_4c01cf277979.slice/crio-65d358ed2db8fc7e84347ccd2751fca4f7ef4399cbc3409476927f58888965f1 WatchSource:0}: Error finding container 65d358ed2db8fc7e84347ccd2751fca4f7ef4399cbc3409476927f58888965f1: Status 404 returned error can't find the container with id 65d358ed2db8fc7e84347ccd2751fca4f7ef4399cbc3409476927f58888965f1 Feb 28 10:56:03 crc kubenswrapper[4996]: I0228 10:56:03.999082 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979","Type":"ContainerStarted","Data":"65d358ed2db8fc7e84347ccd2751fca4f7ef4399cbc3409476927f58888965f1"} Feb 28 10:56:04 crc kubenswrapper[4996]: I0228 10:56:04.232653 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537936-pqhss" Feb 28 10:56:04 crc kubenswrapper[4996]: I0228 10:56:04.293591 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl95m\" (UniqueName: \"kubernetes.io/projected/5781ac45-2386-4e29-90e8-2d8b81401ace-kube-api-access-fl95m\") pod \"5781ac45-2386-4e29-90e8-2d8b81401ace\" (UID: \"5781ac45-2386-4e29-90e8-2d8b81401ace\") " Feb 28 10:56:04 crc kubenswrapper[4996]: I0228 10:56:04.300723 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5781ac45-2386-4e29-90e8-2d8b81401ace-kube-api-access-fl95m" (OuterVolumeSpecName: "kube-api-access-fl95m") pod "5781ac45-2386-4e29-90e8-2d8b81401ace" (UID: "5781ac45-2386-4e29-90e8-2d8b81401ace"). InnerVolumeSpecName "kube-api-access-fl95m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:56:04 crc kubenswrapper[4996]: I0228 10:56:04.395743 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl95m\" (UniqueName: \"kubernetes.io/projected/5781ac45-2386-4e29-90e8-2d8b81401ace-kube-api-access-fl95m\") on node \"crc\" DevicePath \"\"" Feb 28 10:56:05 crc kubenswrapper[4996]: I0228 10:56:05.021385 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537936-pqhss" event={"ID":"5781ac45-2386-4e29-90e8-2d8b81401ace","Type":"ContainerDied","Data":"e2322873264a71c74102426fc84cd7eff9408e2b365776674693816cd9e19f4b"} Feb 28 10:56:05 crc kubenswrapper[4996]: I0228 10:56:05.021665 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2322873264a71c74102426fc84cd7eff9408e2b365776674693816cd9e19f4b" Feb 28 10:56:05 crc kubenswrapper[4996]: I0228 10:56:05.021494 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537936-pqhss" Feb 28 10:56:05 crc kubenswrapper[4996]: I0228 10:56:05.307061 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537930-4m2px"] Feb 28 10:56:05 crc kubenswrapper[4996]: I0228 10:56:05.332960 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537930-4m2px"] Feb 28 10:56:07 crc kubenswrapper[4996]: I0228 10:56:07.048899 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21b44dcf-9b64-423d-9b23-7f5ead262ae4" path="/var/lib/kubelet/pods/21b44dcf-9b64-423d-9b23-7f5ead262ae4/volumes" Feb 28 10:56:12 crc kubenswrapper[4996]: I0228 10:56:12.249264 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:56:12 crc kubenswrapper[4996]: I0228 10:56:12.250236 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:56:25 crc kubenswrapper[4996]: E0228 10:56:25.385348 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tobiko:current-podified" Feb 28 10:56:25 crc kubenswrapper[4996]: E0228 10:56:25.386073 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tobiko-tests-tobiko,Image:quay.io/podified-antelope-centos9/openstack-tobiko:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TOBIKO_DEBUG_MODE,Value:false,ValueFrom:nil,},EnvVar{Name:TOBIKO_KEYS_FOLDER,Value:/etc/test_operator,ValueFrom:nil,},EnvVar{Name:TOBIKO_LOGS_DIR_NAME,Value:tobiko-tests-tobiko-s00-podified-functional,ValueFrom:nil,},EnvVar{Name:TOBIKO_PREVENT_CREATE,Value:,ValueFrom:nil,},EnvVar{Name:TOBIKO_PYTEST_ADDOPTS,Value:,ValueFrom:nil,},EnvVar{Name:TOBIKO_TESTENV,Value:functional -- tobiko/tests/functional/podified/test_topology.py,ValueFrom:nil,},EnvVar{Name:TOBIKO_VERSION,Value:master,ValueFrom:nil,},EnvVar{Name:TOX_NUM_PROCESSES,Value:2,ValueFrom:nil,},EnvVar{Name:USE_EXTERNAL_FILES,Value:True,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{8 0} {} 8 DecimalSI},memory: {{8589934592 0} {} BinarySI},},Requests:ResourceList{cpu: {{4 0} {} 4 DecimalSI},memory: {{4294967296 0} {} 4Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tobiko,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tobiko/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/var/lib/tobiko/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tobiko-config,ReadOnly:false,MountPath:/etc/tobiko/tobiko.conf,SubPath:tobiko.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tobiko-private-key,ReadOnly:true,MountPath:/etc/test_operator/id_ecdsa,SubPath:id_ecdsa,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tobiko-public-key,ReadOnly:true,MountPath:/etc/test_operator/id_ecdsa.pub,SubPath:id_ecdsa.pub,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/var/lib/tobiko/.kube/config,Sub
Path:config,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7c2dl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42495,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42495,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tobiko-tests-tobiko-s00-podified-functional_openstack(8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 10:56:25 crc kubenswrapper[4996]: E0228 10:56:25.387300 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tobiko-tests-tobiko\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podUID="8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" Feb 28 10:56:26 crc kubenswrapper[4996]: E0228 10:56:26.255300 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tobiko-tests-tobiko\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tobiko:current-podified\\\"\"" 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podUID="8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" Feb 28 10:56:38 crc kubenswrapper[4996]: I0228 10:56:38.349242 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979","Type":"ContainerStarted","Data":"2cc66e60d2955cbae90ca17120139f9bb44b5991c980357181ce2acad0e8a7da"} Feb 28 10:56:38 crc kubenswrapper[4996]: I0228 10:56:38.374500 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" podStartSLOduration=2.885775152 podStartE2EDuration="36.374481481s" podCreationTimestamp="2026-02-28 10:56:02 +0000 UTC" firstStartedPulling="2026-02-28 10:56:03.938738046 +0000 UTC m=+6927.629540857" lastFinishedPulling="2026-02-28 10:56:37.427444375 +0000 UTC m=+6961.118247186" observedRunningTime="2026-02-28 10:56:38.368593416 +0000 UTC m=+6962.059396237" watchObservedRunningTime="2026-02-28 10:56:38.374481481 +0000 UTC m=+6962.065284292" Feb 28 10:56:42 crc kubenswrapper[4996]: I0228 10:56:42.248868 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:56:42 crc kubenswrapper[4996]: I0228 10:56:42.249470 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:56:50 crc kubenswrapper[4996]: I0228 10:56:50.433390 4996 scope.go:117] "RemoveContainer" 
containerID="b782dbdf94e2f00cbe01fc35f5b46ef1303286421357d71d0dd01196e5664ca6" Feb 28 10:57:12 crc kubenswrapper[4996]: I0228 10:57:12.249152 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:57:12 crc kubenswrapper[4996]: I0228 10:57:12.249608 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:57:12 crc kubenswrapper[4996]: I0228 10:57:12.249654 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 10:57:12 crc kubenswrapper[4996]: I0228 10:57:12.250513 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d7749c606980be590cdf9e1ba9745d9acb89d5faee0102da2aab1679aa836ba"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 10:57:12 crc kubenswrapper[4996]: I0228 10:57:12.250582 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://2d7749c606980be590cdf9e1ba9745d9acb89d5faee0102da2aab1679aa836ba" gracePeriod=600 Feb 28 10:57:12 crc kubenswrapper[4996]: I0228 10:57:12.721183 4996 generic.go:334] "Generic (PLEG): container finished" 
podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="2d7749c606980be590cdf9e1ba9745d9acb89d5faee0102da2aab1679aa836ba" exitCode=0 Feb 28 10:57:12 crc kubenswrapper[4996]: I0228 10:57:12.721233 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"2d7749c606980be590cdf9e1ba9745d9acb89d5faee0102da2aab1679aa836ba"} Feb 28 10:57:12 crc kubenswrapper[4996]: I0228 10:57:12.721730 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e"} Feb 28 10:57:12 crc kubenswrapper[4996]: I0228 10:57:12.721769 4996 scope.go:117] "RemoveContainer" containerID="37da45d2295cc44fc42b11594a4d69917bd63fdb0720b4550b2338520d9b88c7" Feb 28 10:57:38 crc kubenswrapper[4996]: I0228 10:57:38.964449 4996 generic.go:334] "Generic (PLEG): container finished" podID="8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" containerID="2cc66e60d2955cbae90ca17120139f9bb44b5991c980357181ce2acad0e8a7da" exitCode=0 Feb 28 10:57:38 crc kubenswrapper[4996]: I0228 10:57:38.964575 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979","Type":"ContainerDied","Data":"2cc66e60d2955cbae90ca17120139f9bb44b5991c980357181ce2acad0e8a7da"} Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.441375 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549246 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ca-certs\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549290 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kubeconfig\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549317 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-workdir\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549346 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-openstack-config-secret\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549380 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-temporary\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549413 4996 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-config\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549437 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549452 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-clouds-config\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549543 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-public-key\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549573 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-private-key\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549610 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c2dl\" (UniqueName: \"kubernetes.io/projected/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kube-api-access-7c2dl\") pod 
\"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.549667 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ceph\") pod \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\" (UID: \"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979\") " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.550368 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.561222 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ceph" (OuterVolumeSpecName: "ceph") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.562777 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Feb 28 10:57:40 crc kubenswrapper[4996]: E0228 10:57:40.563212 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" containerName="tobiko-tests-tobiko" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.563228 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" containerName="tobiko-tests-tobiko" Feb 28 10:57:40 crc kubenswrapper[4996]: E0228 10:57:40.563245 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5781ac45-2386-4e29-90e8-2d8b81401ace" containerName="oc" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.563251 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5781ac45-2386-4e29-90e8-2d8b81401ace" containerName="oc" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.563411 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5781ac45-2386-4e29-90e8-2d8b81401ace" containerName="oc" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.563436 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" containerName="tobiko-tests-tobiko" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.564055 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.569409 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.587964 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kube-api-access-7c2dl" (OuterVolumeSpecName: "kube-api-access-7c2dl") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "kube-api-access-7c2dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.590990 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.602307 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "tobiko-public-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.605592 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.619735 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.623618 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.629827 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.637187 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "test-operator-clouds-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.645212 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.651587 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.651695 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.651824 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652347 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652446 4996 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-config\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652510 4996 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652532 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652583 4996 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652615 4996 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652684 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c2dl\" (UniqueName: \"kubernetes.io/projected/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kube-api-access-7c2dl\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652702 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652714 4996 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652728 4996 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-kubeconfig\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.652766 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.678379 4996 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.754997 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755163 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755181 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755202 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2cvqj\" (UniqueName: \"kubernetes.io/projected/e548eb85-b67c-4520-80ff-88f65e118673-kube-api-access-2cvqj\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755270 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755346 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755382 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755415 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755502 4996 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755573 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755607 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755735 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.755825 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 
crc kubenswrapper[4996]: I0228 10:57:40.756289 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-clouds-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.756868 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-config\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.759815 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-openstack-config-secret\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.782179 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.857737 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cvqj\" (UniqueName: \"kubernetes.io/projected/e548eb85-b67c-4520-80ff-88f65e118673-kube-api-access-2cvqj\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 
10:57:40.857993 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.858303 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.858380 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.858437 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.858483 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 
10:57:40.858562 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.858762 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.859130 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-workdir\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.859396 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-temporary\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.859674 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-public-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 
crc kubenswrapper[4996]: I0228 10:57:40.861116 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-private-key\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.861173 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-kubeconfig\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.862583 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ceph\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.862950 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ca-certs\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.874595 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cvqj\" (UniqueName: \"kubernetes.io/projected/e548eb85-b67c-4520-80ff-88f65e118673-kube-api-access-2cvqj\") pod \"tobiko-tests-tobiko-s01-sanity\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.980971 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tobiko-tests-tobiko-s00-podified-functional" event={"ID":"8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979","Type":"ContainerDied","Data":"65d358ed2db8fc7e84347ccd2751fca4f7ef4399cbc3409476927f58888965f1"} Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.981024 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65d358ed2db8fc7e84347ccd2751fca4f7ef4399cbc3409476927f58888965f1" Feb 28 10:57:40 crc kubenswrapper[4996]: I0228 10:57:40.981062 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s00-podified-functional" Feb 28 10:57:41 crc kubenswrapper[4996]: I0228 10:57:41.022585 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:57:41 crc kubenswrapper[4996]: I0228 10:57:41.565766 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tobiko-tests-tobiko-s01-sanity"] Feb 28 10:57:41 crc kubenswrapper[4996]: I0228 10:57:41.940821 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979" (UID: "8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:57:41 crc kubenswrapper[4996]: I0228 10:57:41.983986 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 28 10:57:41 crc kubenswrapper[4996]: I0228 10:57:41.999545 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"e548eb85-b67c-4520-80ff-88f65e118673","Type":"ContainerStarted","Data":"b0c896c68a18faa46586d89ff277dc914800397a7a40e27e06dbd32e00d80438"} Feb 28 10:57:43 crc kubenswrapper[4996]: I0228 10:57:43.009943 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"e548eb85-b67c-4520-80ff-88f65e118673","Type":"ContainerStarted","Data":"a4bbf20afc907441d48e1bccf1e6165860283913195afdda1a5c0ced2cc12442"} Feb 28 10:57:43 crc kubenswrapper[4996]: I0228 10:57:43.036460 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tobiko-tests-tobiko-s01-sanity" podStartSLOduration=3.036441328 podStartE2EDuration="3.036441328s" podCreationTimestamp="2026-02-28 10:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 10:57:43.031576058 +0000 UTC m=+7026.722378879" watchObservedRunningTime="2026-02-28 10:57:43.036441328 +0000 UTC m=+7026.727244139" Feb 28 10:57:50 crc kubenswrapper[4996]: I0228 10:57:50.516330 4996 scope.go:117] "RemoveContainer" containerID="c84b5957f2d35dd9c22032b27cc3022f3ad88bfbf916134a9a731fcc1e06dfd6" Feb 28 10:57:50 crc kubenswrapper[4996]: I0228 10:57:50.552361 4996 scope.go:117] "RemoveContainer" containerID="70d80e6c1991865bd8da2f2fec418221bde5229961106b59c029991a61cac683" Feb 28 10:57:50 crc kubenswrapper[4996]: I0228 10:57:50.606997 4996 
scope.go:117] "RemoveContainer" containerID="3196e7ebdee521cd39eaa96d817f8d18988984c9bdaec4e60d6242a525ee471d" Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.138176 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537938-8jb8v"] Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.140346 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537938-8jb8v" Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.143439 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.143861 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.145291 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.167738 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537938-8jb8v"] Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.168489 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-594wl\" (UniqueName: \"kubernetes.io/projected/cbc34363-0648-4598-b3e5-eb546f7d5bf2-kube-api-access-594wl\") pod \"auto-csr-approver-29537938-8jb8v\" (UID: \"cbc34363-0648-4598-b3e5-eb546f7d5bf2\") " pod="openshift-infra/auto-csr-approver-29537938-8jb8v" Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.270260 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-594wl\" (UniqueName: \"kubernetes.io/projected/cbc34363-0648-4598-b3e5-eb546f7d5bf2-kube-api-access-594wl\") pod \"auto-csr-approver-29537938-8jb8v\" (UID: \"cbc34363-0648-4598-b3e5-eb546f7d5bf2\") " 
pod="openshift-infra/auto-csr-approver-29537938-8jb8v" Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.295707 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-594wl\" (UniqueName: \"kubernetes.io/projected/cbc34363-0648-4598-b3e5-eb546f7d5bf2-kube-api-access-594wl\") pod \"auto-csr-approver-29537938-8jb8v\" (UID: \"cbc34363-0648-4598-b3e5-eb546f7d5bf2\") " pod="openshift-infra/auto-csr-approver-29537938-8jb8v" Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.466402 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537938-8jb8v" Feb 28 10:58:00 crc kubenswrapper[4996]: I0228 10:58:00.963469 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537938-8jb8v"] Feb 28 10:58:01 crc kubenswrapper[4996]: I0228 10:58:01.164655 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537938-8jb8v" event={"ID":"cbc34363-0648-4598-b3e5-eb546f7d5bf2","Type":"ContainerStarted","Data":"70e68b3f433768fd0a3bdc1c7b0a14cbd013acb67268522778296eed08370586"} Feb 28 10:58:03 crc kubenswrapper[4996]: I0228 10:58:03.185520 4996 generic.go:334] "Generic (PLEG): container finished" podID="cbc34363-0648-4598-b3e5-eb546f7d5bf2" containerID="3b5932a235a3cb5a50566b06215a0ab9c9b1502dfb526fc41b3b6c04f45a0d66" exitCode=0 Feb 28 10:58:03 crc kubenswrapper[4996]: I0228 10:58:03.186020 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537938-8jb8v" event={"ID":"cbc34363-0648-4598-b3e5-eb546f7d5bf2","Type":"ContainerDied","Data":"3b5932a235a3cb5a50566b06215a0ab9c9b1502dfb526fc41b3b6c04f45a0d66"} Feb 28 10:58:04 crc kubenswrapper[4996]: I0228 10:58:04.515916 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537938-8jb8v" Feb 28 10:58:04 crc kubenswrapper[4996]: I0228 10:58:04.660737 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-594wl\" (UniqueName: \"kubernetes.io/projected/cbc34363-0648-4598-b3e5-eb546f7d5bf2-kube-api-access-594wl\") pod \"cbc34363-0648-4598-b3e5-eb546f7d5bf2\" (UID: \"cbc34363-0648-4598-b3e5-eb546f7d5bf2\") " Feb 28 10:58:04 crc kubenswrapper[4996]: I0228 10:58:04.669841 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbc34363-0648-4598-b3e5-eb546f7d5bf2-kube-api-access-594wl" (OuterVolumeSpecName: "kube-api-access-594wl") pod "cbc34363-0648-4598-b3e5-eb546f7d5bf2" (UID: "cbc34363-0648-4598-b3e5-eb546f7d5bf2"). InnerVolumeSpecName "kube-api-access-594wl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:58:04 crc kubenswrapper[4996]: I0228 10:58:04.763181 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-594wl\" (UniqueName: \"kubernetes.io/projected/cbc34363-0648-4598-b3e5-eb546f7d5bf2-kube-api-access-594wl\") on node \"crc\" DevicePath \"\"" Feb 28 10:58:05 crc kubenswrapper[4996]: I0228 10:58:05.205222 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537938-8jb8v" event={"ID":"cbc34363-0648-4598-b3e5-eb546f7d5bf2","Type":"ContainerDied","Data":"70e68b3f433768fd0a3bdc1c7b0a14cbd013acb67268522778296eed08370586"} Feb 28 10:58:05 crc kubenswrapper[4996]: I0228 10:58:05.205261 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537938-8jb8v" Feb 28 10:58:05 crc kubenswrapper[4996]: I0228 10:58:05.205268 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70e68b3f433768fd0a3bdc1c7b0a14cbd013acb67268522778296eed08370586" Feb 28 10:58:05 crc kubenswrapper[4996]: I0228 10:58:05.602640 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537932-5qbbl"] Feb 28 10:58:05 crc kubenswrapper[4996]: I0228 10:58:05.621380 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537932-5qbbl"] Feb 28 10:58:07 crc kubenswrapper[4996]: I0228 10:58:07.074251 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f05a25-8a05-441e-afe0-2293b23f69c8" path="/var/lib/kubelet/pods/42f05a25-8a05-441e-afe0-2293b23f69c8/volumes" Feb 28 10:58:50 crc kubenswrapper[4996]: I0228 10:58:50.674175 4996 scope.go:117] "RemoveContainer" containerID="d0450795667c8db77d886bb407eb7a6329072eec12810280384d2e51d3f030a8" Feb 28 10:59:05 crc kubenswrapper[4996]: I0228 10:59:05.785199 4996 generic.go:334] "Generic (PLEG): container finished" podID="e548eb85-b67c-4520-80ff-88f65e118673" containerID="a4bbf20afc907441d48e1bccf1e6165860283913195afdda1a5c0ced2cc12442" exitCode=0 Feb 28 10:59:05 crc kubenswrapper[4996]: I0228 10:59:05.785736 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"e548eb85-b67c-4520-80ff-88f65e118673","Type":"ContainerDied","Data":"a4bbf20afc907441d48e1bccf1e6165860283913195afdda1a5c0ced2cc12442"} Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.212449 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.353898 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-temporary\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.353949 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ceph\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.353979 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-config\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.354032 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ca-certs\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.354081 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-workdir\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.354127 4996 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-public-key\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.354214 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-openstack-config-secret\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.354245 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cvqj\" (UniqueName: \"kubernetes.io/projected/e548eb85-b67c-4520-80ff-88f65e118673-kube-api-access-2cvqj\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.354295 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.354318 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-kubeconfig\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.354423 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-private-key\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: 
\"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.354511 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-clouds-config\") pod \"e548eb85-b67c-4520-80ff-88f65e118673\" (UID: \"e548eb85-b67c-4520-80ff-88f65e118673\") " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.356766 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.362358 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e548eb85-b67c-4520-80ff-88f65e118673-kube-api-access-2cvqj" (OuterVolumeSpecName: "kube-api-access-2cvqj") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "kube-api-access-2cvqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.366631 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ceph" (OuterVolumeSpecName: "ceph") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.375968 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.380542 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-private-key" (OuterVolumeSpecName: "tobiko-private-key") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "tobiko-private-key". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.386194 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.392398 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-public-key" (OuterVolumeSpecName: "tobiko-public-key") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "tobiko-public-key". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.401777 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-config" (OuterVolumeSpecName: "tobiko-config") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "tobiko-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.417929 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.430828 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.440575 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "test-operator-clouds-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457427 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457465 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cvqj\" (UniqueName: \"kubernetes.io/projected/e548eb85-b67c-4520-80ff-88f65e118673-kube-api-access-2cvqj\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457501 4996 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457516 4996 reconciler_common.go:293] "Volume detached for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-kubeconfig\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457529 4996 reconciler_common.go:293] "Volume detached for volume \"tobiko-private-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-private-key\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457542 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457554 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-temporary\") on node \"crc\" 
DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457568 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457580 4996 reconciler_common.go:293] "Volume detached for volume \"tobiko-config\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-config\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457592 4996 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e548eb85-b67c-4520-80ff-88f65e118673-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.457605 4996 reconciler_common.go:293] "Volume detached for volume \"tobiko-public-key\" (UniqueName: \"kubernetes.io/configmap/e548eb85-b67c-4520-80ff-88f65e118673-tobiko-public-key\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.478436 4996 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.559716 4996 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.804881 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tobiko-tests-tobiko-s01-sanity" event={"ID":"e548eb85-b67c-4520-80ff-88f65e118673","Type":"ContainerDied","Data":"b0c896c68a18faa46586d89ff277dc914800397a7a40e27e06dbd32e00d80438"} Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.805252 4996 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b0c896c68a18faa46586d89ff277dc914800397a7a40e27e06dbd32e00d80438" Feb 28 10:59:07 crc kubenswrapper[4996]: I0228 10:59:07.804922 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tobiko-tests-tobiko-s01-sanity" Feb 28 10:59:08 crc kubenswrapper[4996]: I0228 10:59:08.525683 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e548eb85-b67c-4520-80ff-88f65e118673" (UID: "e548eb85-b67c-4520-80ff-88f65e118673"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 10:59:08 crc kubenswrapper[4996]: I0228 10:59:08.583698 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e548eb85-b67c-4520-80ff-88f65e118673-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 28 10:59:08 crc kubenswrapper[4996]: E0228 10:59:08.808318 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode548eb85_b67c_4520_80ff_88f65e118673.slice\": RecentStats: unable to find data in memory cache]" Feb 28 10:59:12 crc kubenswrapper[4996]: I0228 10:59:12.248651 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:59:12 crc kubenswrapper[4996]: I0228 10:59:12.249193 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.621258 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Feb 28 10:59:16 crc kubenswrapper[4996]: E0228 10:59:16.622161 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e548eb85-b67c-4520-80ff-88f65e118673" containerName="tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.622179 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="e548eb85-b67c-4520-80ff-88f65e118673" containerName="tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: E0228 10:59:16.622228 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbc34363-0648-4598-b3e5-eb546f7d5bf2" containerName="oc" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.622249 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbc34363-0648-4598-b3e5-eb546f7d5bf2" containerName="oc" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.622481 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbc34363-0648-4598-b3e5-eb546f7d5bf2" containerName="oc" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.622523 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="e548eb85-b67c-4520-80ff-88f65e118673" containerName="tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.623393 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.631390 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.750488 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhmtc\" (UniqueName: \"kubernetes.io/projected/f624dd26-b398-4f25-b94e-74a5560432a8-kube-api-access-hhmtc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"f624dd26-b398-4f25-b94e-74a5560432a8\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.750659 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"f624dd26-b398-4f25-b94e-74a5560432a8\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.853526 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhmtc\" (UniqueName: \"kubernetes.io/projected/f624dd26-b398-4f25-b94e-74a5560432a8-kube-api-access-hhmtc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"f624dd26-b398-4f25-b94e-74a5560432a8\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.853783 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"f624dd26-b398-4f25-b94e-74a5560432a8\") " 
pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.854333 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"f624dd26-b398-4f25-b94e-74a5560432a8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.873742 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhmtc\" (UniqueName: \"kubernetes.io/projected/f624dd26-b398-4f25-b94e-74a5560432a8-kube-api-access-hhmtc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"f624dd26-b398-4f25-b94e-74a5560432a8\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.882503 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tobiko-tobiko-tests-tobiko\" (UID: \"f624dd26-b398-4f25-b94e-74a5560432a8\") " pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 28 10:59:16 crc kubenswrapper[4996]: I0228 10:59:16.989997 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" Feb 28 10:59:17 crc kubenswrapper[4996]: I0228 10:59:17.411641 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko"] Feb 28 10:59:17 crc kubenswrapper[4996]: I0228 10:59:17.417718 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 10:59:17 crc kubenswrapper[4996]: I0228 10:59:17.912385 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"f624dd26-b398-4f25-b94e-74a5560432a8","Type":"ContainerStarted","Data":"752346adc2cb9d8fe790a89244f5a0fbf1ff857cef33c7c63ca20767617576ea"} Feb 28 10:59:18 crc kubenswrapper[4996]: I0228 10:59:18.924677 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" event={"ID":"f624dd26-b398-4f25-b94e-74a5560432a8","Type":"ContainerStarted","Data":"7652647d9a8eba783456a7be1b725156779f22becd5db35cbab6667f51e07591"} Feb 28 10:59:18 crc kubenswrapper[4996]: I0228 10:59:18.946626 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tobiko-tobiko-tests-tobiko" podStartSLOduration=2.368680455 podStartE2EDuration="2.946594537s" podCreationTimestamp="2026-02-28 10:59:16 +0000 UTC" firstStartedPulling="2026-02-28 10:59:17.417501828 +0000 UTC m=+7121.108304639" lastFinishedPulling="2026-02-28 10:59:17.99541591 +0000 UTC m=+7121.686218721" observedRunningTime="2026-02-28 10:59:18.939996105 +0000 UTC m=+7122.630798916" watchObservedRunningTime="2026-02-28 10:59:18.946594537 +0000 UTC m=+7122.637397348" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.534310 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ansibletest-ansibletest"] Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.536721 4996 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.539455 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.540841 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.554395 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.647563 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hmpd\" (UniqueName: \"kubernetes.io/projected/f6886487-0ab2-404d-aa70-4be59320885a-kube-api-access-9hmpd\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.647620 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.647661 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.647771 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.647810 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.647847 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.647888 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ceph\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.647936 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.647975 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.648051 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750190 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750254 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750289 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750322 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9hmpd\" (UniqueName: \"kubernetes.io/projected/f6886487-0ab2-404d-aa70-4be59320885a-kube-api-access-9hmpd\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750340 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750362 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750463 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750488 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750513 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"workload-ssh-secret\" (UniqueName: 
\"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750540 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ceph\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.750887 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-temporary\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.751164 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-workdir\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.751484 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.751635 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.756676 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ceph\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.756829 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config-secret\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.757262 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ca-certs\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.760784 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-workload-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.764775 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-compute-ssh-secret\") pod \"ansibletest-ansibletest\" (UID: 
\"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.768701 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hmpd\" (UniqueName: \"kubernetes.io/projected/f6886487-0ab2-404d-aa70-4be59320885a-kube-api-access-9hmpd\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.787900 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ansibletest-ansibletest\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " pod="openstack/ansibletest-ansibletest" Feb 28 10:59:30 crc kubenswrapper[4996]: I0228 10:59:30.881540 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Feb 28 10:59:31 crc kubenswrapper[4996]: I0228 10:59:31.331774 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ansibletest-ansibletest"] Feb 28 10:59:32 crc kubenswrapper[4996]: I0228 10:59:32.042884 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"f6886487-0ab2-404d-aa70-4be59320885a","Type":"ContainerStarted","Data":"f22745281efb317ffcc138861bfe7e8e84feec3cd9de549c94fbae4c78317a92"} Feb 28 10:59:42 crc kubenswrapper[4996]: I0228 10:59:42.249315 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 10:59:42 crc kubenswrapper[4996]: I0228 10:59:42.249880 4996 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 10:59:49 crc kubenswrapper[4996]: E0228 10:59:49.264391 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified" Feb 28 10:59:49 crc kubenswrapper[4996]: E0228 10:59:49.265327 4996 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 10:59:49 crc kubenswrapper[4996]: container &Container{Name:ansibletest-ansibletest,Image:quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:POD_ANSIBLE_EXTRA_VARS,Value:-e manual_run=false,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_FILE_EXTRA_VARS,Value:--- Feb 28 10:59:49 crc kubenswrapper[4996]: foo: bar Feb 28 10:59:49 crc kubenswrapper[4996]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_BRANCH,Value:,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_GIT_REPO,Value:https://github.com/ansible/test-playbooks,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_INVENTORY,Value:localhost ansible_connection=local ansible_python_interpreter=python3 Feb 28 10:59:49 crc kubenswrapper[4996]: ,ValueFrom:nil,},EnvVar{Name:POD_ANSIBLE_PLAYBOOK,Value:./debug.yml,ValueFrom:nil,},EnvVar{Name:POD_DEBUG,Value:false,ValueFrom:nil,},EnvVar{Name:POD_INSTALL_COLLECTIONS,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{4 0} {} 4 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{2147483648 0} {} 2Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/ansible,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/AnsibleTests/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/ansible/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/var/lib/ansible/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:compute-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/.ssh/compute_id,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:workload-ssh-secret,ReadOnly:true,MountPath:/var/lib/ansible/test_keypair.key,SubPath:ssh-privatekey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9hmpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/se
rviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*227,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*227,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ansibletest-ansibletest_openstack(f6886487-0ab2-404d-aa70-4be59320885a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Feb 28 10:59:49 crc kubenswrapper[4996]: > logger="UnhandledError" Feb 28 10:59:49 crc kubenswrapper[4996]: E0228 10:59:49.266531 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ansibletest-ansibletest" podUID="f6886487-0ab2-404d-aa70-4be59320885a" Feb 28 10:59:50 crc kubenswrapper[4996]: E0228 10:59:50.197055 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ansibletest-ansibletest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ansible-tests:current-podified\\\"\"" pod="openstack/ansibletest-ansibletest" podUID="f6886487-0ab2-404d-aa70-4be59320885a" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.146337 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537940-pw9mp"] Feb 28 11:00:00 crc kubenswrapper[4996]: 
I0228 11:00:00.150820 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537940-pw9mp" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.153323 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.153420 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.155134 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.159710 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw"] Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.161143 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.163155 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.163656 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.168263 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537940-pw9mp"] Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.190966 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw"] Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.276470 4996 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-secret-volume\") pod \"collect-profiles-29537940-z4mdw\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.276559 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt7wj\" (UniqueName: \"kubernetes.io/projected/a9b07b45-80e5-48e7-a828-37120ebe17c4-kube-api-access-dt7wj\") pod \"auto-csr-approver-29537940-pw9mp\" (UID: \"a9b07b45-80e5-48e7-a828-37120ebe17c4\") " pod="openshift-infra/auto-csr-approver-29537940-pw9mp" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.276767 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gtsj\" (UniqueName: \"kubernetes.io/projected/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-kube-api-access-2gtsj\") pod \"collect-profiles-29537940-z4mdw\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.276876 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-config-volume\") pod \"collect-profiles-29537940-z4mdw\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.379190 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt7wj\" (UniqueName: \"kubernetes.io/projected/a9b07b45-80e5-48e7-a828-37120ebe17c4-kube-api-access-dt7wj\") pod \"auto-csr-approver-29537940-pw9mp\" (UID: 
\"a9b07b45-80e5-48e7-a828-37120ebe17c4\") " pod="openshift-infra/auto-csr-approver-29537940-pw9mp" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.379321 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gtsj\" (UniqueName: \"kubernetes.io/projected/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-kube-api-access-2gtsj\") pod \"collect-profiles-29537940-z4mdw\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.379383 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-config-volume\") pod \"collect-profiles-29537940-z4mdw\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.379473 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-secret-volume\") pod \"collect-profiles-29537940-z4mdw\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.380432 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-config-volume\") pod \"collect-profiles-29537940-z4mdw\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.389055 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-secret-volume\") pod \"collect-profiles-29537940-z4mdw\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.401626 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt7wj\" (UniqueName: \"kubernetes.io/projected/a9b07b45-80e5-48e7-a828-37120ebe17c4-kube-api-access-dt7wj\") pod \"auto-csr-approver-29537940-pw9mp\" (UID: \"a9b07b45-80e5-48e7-a828-37120ebe17c4\") " pod="openshift-infra/auto-csr-approver-29537940-pw9mp" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.404254 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gtsj\" (UniqueName: \"kubernetes.io/projected/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-kube-api-access-2gtsj\") pod \"collect-profiles-29537940-z4mdw\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.475077 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537940-pw9mp" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.485729 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:00 crc kubenswrapper[4996]: I0228 11:00:00.930384 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537940-pw9mp"] Feb 28 11:00:01 crc kubenswrapper[4996]: W0228 11:00:01.041181 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea91fa8e_4683_45b9_b108_6b7f7c3aae28.slice/crio-aad0936970cbc2cfc36da0bec5d989d58febc0b2b15942646951ffa6110634d1 WatchSource:0}: Error finding container aad0936970cbc2cfc36da0bec5d989d58febc0b2b15942646951ffa6110634d1: Status 404 returned error can't find the container with id aad0936970cbc2cfc36da0bec5d989d58febc0b2b15942646951ffa6110634d1 Feb 28 11:00:01 crc kubenswrapper[4996]: I0228 11:00:01.045358 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw"] Feb 28 11:00:01 crc kubenswrapper[4996]: I0228 11:00:01.308191 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537940-pw9mp" event={"ID":"a9b07b45-80e5-48e7-a828-37120ebe17c4","Type":"ContainerStarted","Data":"37ab0f0181dd205584df4742efa9502033e28dcbd85e57b6c02d59fc07fa8577"} Feb 28 11:00:01 crc kubenswrapper[4996]: I0228 11:00:01.309733 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" event={"ID":"ea91fa8e-4683-45b9-b108-6b7f7c3aae28","Type":"ContainerStarted","Data":"b05d62bedf5b78d7e276dccc7a41e4b7df613fc75876a999ae82a463f0e82ed0"} Feb 28 11:00:01 crc kubenswrapper[4996]: I0228 11:00:01.309791 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" 
event={"ID":"ea91fa8e-4683-45b9-b108-6b7f7c3aae28","Type":"ContainerStarted","Data":"aad0936970cbc2cfc36da0bec5d989d58febc0b2b15942646951ffa6110634d1"} Feb 28 11:00:01 crc kubenswrapper[4996]: I0228 11:00:01.332944 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" podStartSLOduration=1.332924589 podStartE2EDuration="1.332924589s" podCreationTimestamp="2026-02-28 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 11:00:01.325240201 +0000 UTC m=+7165.016043022" watchObservedRunningTime="2026-02-28 11:00:01.332924589 +0000 UTC m=+7165.023727400" Feb 28 11:00:02 crc kubenswrapper[4996]: I0228 11:00:02.322559 4996 generic.go:334] "Generic (PLEG): container finished" podID="ea91fa8e-4683-45b9-b108-6b7f7c3aae28" containerID="b05d62bedf5b78d7e276dccc7a41e4b7df613fc75876a999ae82a463f0e82ed0" exitCode=0 Feb 28 11:00:02 crc kubenswrapper[4996]: I0228 11:00:02.322646 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" event={"ID":"ea91fa8e-4683-45b9-b108-6b7f7c3aae28","Type":"ContainerDied","Data":"b05d62bedf5b78d7e276dccc7a41e4b7df613fc75876a999ae82a463f0e82ed0"} Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.679624 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.775906 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gtsj\" (UniqueName: \"kubernetes.io/projected/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-kube-api-access-2gtsj\") pod \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.776336 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-config-volume\") pod \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.776541 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-secret-volume\") pod \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\" (UID: \"ea91fa8e-4683-45b9-b108-6b7f7c3aae28\") " Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.777329 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea91fa8e-4683-45b9-b108-6b7f7c3aae28" (UID: "ea91fa8e-4683-45b9-b108-6b7f7c3aae28"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.783375 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-kube-api-access-2gtsj" (OuterVolumeSpecName: "kube-api-access-2gtsj") pod "ea91fa8e-4683-45b9-b108-6b7f7c3aae28" (UID: "ea91fa8e-4683-45b9-b108-6b7f7c3aae28"). 
InnerVolumeSpecName "kube-api-access-2gtsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.783668 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea91fa8e-4683-45b9-b108-6b7f7c3aae28" (UID: "ea91fa8e-4683-45b9-b108-6b7f7c3aae28"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.879015 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gtsj\" (UniqueName: \"kubernetes.io/projected/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-kube-api-access-2gtsj\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.879056 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:03 crc kubenswrapper[4996]: I0228 11:00:03.879071 4996 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea91fa8e-4683-45b9-b108-6b7f7c3aae28-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:04 crc kubenswrapper[4996]: I0228 11:00:04.361061 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" event={"ID":"ea91fa8e-4683-45b9-b108-6b7f7c3aae28","Type":"ContainerDied","Data":"aad0936970cbc2cfc36da0bec5d989d58febc0b2b15942646951ffa6110634d1"} Feb 28 11:00:04 crc kubenswrapper[4996]: I0228 11:00:04.361124 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aad0936970cbc2cfc36da0bec5d989d58febc0b2b15942646951ffa6110634d1" Feb 28 11:00:04 crc kubenswrapper[4996]: I0228 11:00:04.361165 4996 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537940-z4mdw" Feb 28 11:00:04 crc kubenswrapper[4996]: I0228 11:00:04.423420 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx"] Feb 28 11:00:04 crc kubenswrapper[4996]: I0228 11:00:04.431117 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537895-n6zvx"] Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.048907 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bef7e16-096a-4e6e-98dc-77fea33afff9" path="/var/lib/kubelet/pods/6bef7e16-096a-4e6e-98dc-77fea33afff9/volumes" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.243870 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mh49d"] Feb 28 11:00:05 crc kubenswrapper[4996]: E0228 11:00:05.244776 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea91fa8e-4683-45b9-b108-6b7f7c3aae28" containerName="collect-profiles" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.244810 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea91fa8e-4683-45b9-b108-6b7f7c3aae28" containerName="collect-profiles" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.245156 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea91fa8e-4683-45b9-b108-6b7f7c3aae28" containerName="collect-profiles" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.247298 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.268334 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mh49d"] Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.374701 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537940-pw9mp" event={"ID":"a9b07b45-80e5-48e7-a828-37120ebe17c4","Type":"ContainerStarted","Data":"5d755368c2f7db3484adbf0950b50baaf29bae4abe28f0d2ab5cb3eef531741d"} Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.398488 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537940-pw9mp" podStartSLOduration=1.330241794 podStartE2EDuration="5.398467732s" podCreationTimestamp="2026-02-28 11:00:00 +0000 UTC" firstStartedPulling="2026-02-28 11:00:00.942116803 +0000 UTC m=+7164.632919614" lastFinishedPulling="2026-02-28 11:00:05.010342731 +0000 UTC m=+7168.701145552" observedRunningTime="2026-02-28 11:00:05.393053859 +0000 UTC m=+7169.083856680" watchObservedRunningTime="2026-02-28 11:00:05.398467732 +0000 UTC m=+7169.089270553" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.412614 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-catalog-content\") pod \"certified-operators-mh49d\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.412689 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfl8\" (UniqueName: \"kubernetes.io/projected/69b1986e-d53e-4757-b909-311a0b12f7ab-kube-api-access-scfl8\") pod \"certified-operators-mh49d\" (UID: 
\"69b1986e-d53e-4757-b909-311a0b12f7ab\") " pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.412761 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-utilities\") pod \"certified-operators-mh49d\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.514478 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-catalog-content\") pod \"certified-operators-mh49d\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.514528 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scfl8\" (UniqueName: \"kubernetes.io/projected/69b1986e-d53e-4757-b909-311a0b12f7ab-kube-api-access-scfl8\") pod \"certified-operators-mh49d\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.514563 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-utilities\") pod \"certified-operators-mh49d\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.515055 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-catalog-content\") pod \"certified-operators-mh49d\" (UID: 
\"69b1986e-d53e-4757-b909-311a0b12f7ab\") " pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.515270 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-utilities\") pod \"certified-operators-mh49d\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.532874 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scfl8\" (UniqueName: \"kubernetes.io/projected/69b1986e-d53e-4757-b909-311a0b12f7ab-kube-api-access-scfl8\") pod \"certified-operators-mh49d\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:05 crc kubenswrapper[4996]: I0228 11:00:05.584340 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:06 crc kubenswrapper[4996]: I0228 11:00:06.123035 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mh49d"] Feb 28 11:00:06 crc kubenswrapper[4996]: I0228 11:00:06.384109 4996 generic.go:334] "Generic (PLEG): container finished" podID="a9b07b45-80e5-48e7-a828-37120ebe17c4" containerID="5d755368c2f7db3484adbf0950b50baaf29bae4abe28f0d2ab5cb3eef531741d" exitCode=0 Feb 28 11:00:06 crc kubenswrapper[4996]: I0228 11:00:06.384210 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537940-pw9mp" event={"ID":"a9b07b45-80e5-48e7-a828-37120ebe17c4","Type":"ContainerDied","Data":"5d755368c2f7db3484adbf0950b50baaf29bae4abe28f0d2ab5cb3eef531741d"} Feb 28 11:00:06 crc kubenswrapper[4996]: I0228 11:00:06.386995 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" 
event={"ID":"f6886487-0ab2-404d-aa70-4be59320885a","Type":"ContainerStarted","Data":"fb7664002177962bac5da489890ae2aa91714240db07dccbb05ae036c769a79b"} Feb 28 11:00:06 crc kubenswrapper[4996]: I0228 11:00:06.389020 4996 generic.go:334] "Generic (PLEG): container finished" podID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerID="35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6" exitCode=0 Feb 28 11:00:06 crc kubenswrapper[4996]: I0228 11:00:06.389071 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh49d" event={"ID":"69b1986e-d53e-4757-b909-311a0b12f7ab","Type":"ContainerDied","Data":"35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6"} Feb 28 11:00:06 crc kubenswrapper[4996]: I0228 11:00:06.389103 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh49d" event={"ID":"69b1986e-d53e-4757-b909-311a0b12f7ab","Type":"ContainerStarted","Data":"801e7a4cde2feddb99ba14e3ef8f2929ec375582a0c9c1a78734a7350ed51927"} Feb 28 11:00:06 crc kubenswrapper[4996]: I0228 11:00:06.419463 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ansibletest-ansibletest" podStartSLOduration=3.300465771 podStartE2EDuration="37.419444669s" podCreationTimestamp="2026-02-28 10:59:29 +0000 UTC" firstStartedPulling="2026-02-28 10:59:31.333311112 +0000 UTC m=+7135.024113923" lastFinishedPulling="2026-02-28 11:00:05.45229002 +0000 UTC m=+7169.143092821" observedRunningTime="2026-02-28 11:00:06.418621719 +0000 UTC m=+7170.109424540" watchObservedRunningTime="2026-02-28 11:00:06.419444669 +0000 UTC m=+7170.110247490" Feb 28 11:00:07 crc kubenswrapper[4996]: I0228 11:00:07.767398 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537940-pw9mp" Feb 28 11:00:07 crc kubenswrapper[4996]: I0228 11:00:07.858827 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt7wj\" (UniqueName: \"kubernetes.io/projected/a9b07b45-80e5-48e7-a828-37120ebe17c4-kube-api-access-dt7wj\") pod \"a9b07b45-80e5-48e7-a828-37120ebe17c4\" (UID: \"a9b07b45-80e5-48e7-a828-37120ebe17c4\") " Feb 28 11:00:07 crc kubenswrapper[4996]: I0228 11:00:07.865333 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b07b45-80e5-48e7-a828-37120ebe17c4-kube-api-access-dt7wj" (OuterVolumeSpecName: "kube-api-access-dt7wj") pod "a9b07b45-80e5-48e7-a828-37120ebe17c4" (UID: "a9b07b45-80e5-48e7-a828-37120ebe17c4"). InnerVolumeSpecName "kube-api-access-dt7wj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:00:07 crc kubenswrapper[4996]: I0228 11:00:07.962587 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt7wj\" (UniqueName: \"kubernetes.io/projected/a9b07b45-80e5-48e7-a828-37120ebe17c4-kube-api-access-dt7wj\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:08 crc kubenswrapper[4996]: I0228 11:00:08.411029 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537940-pw9mp" Feb 28 11:00:08 crc kubenswrapper[4996]: I0228 11:00:08.410999 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537940-pw9mp" event={"ID":"a9b07b45-80e5-48e7-a828-37120ebe17c4","Type":"ContainerDied","Data":"37ab0f0181dd205584df4742efa9502033e28dcbd85e57b6c02d59fc07fa8577"} Feb 28 11:00:08 crc kubenswrapper[4996]: I0228 11:00:08.411139 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37ab0f0181dd205584df4742efa9502033e28dcbd85e57b6c02d59fc07fa8577" Feb 28 11:00:08 crc kubenswrapper[4996]: I0228 11:00:08.413114 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh49d" event={"ID":"69b1986e-d53e-4757-b909-311a0b12f7ab","Type":"ContainerStarted","Data":"2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588"} Feb 28 11:00:08 crc kubenswrapper[4996]: I0228 11:00:08.463041 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537934-vk9p6"] Feb 28 11:00:08 crc kubenswrapper[4996]: I0228 11:00:08.473250 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537934-vk9p6"] Feb 28 11:00:09 crc kubenswrapper[4996]: I0228 11:00:09.054885 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa" path="/var/lib/kubelet/pods/2a5d6553-a5b4-441e-9a9d-d251d8b2d3aa/volumes" Feb 28 11:00:09 crc kubenswrapper[4996]: I0228 11:00:09.426234 4996 generic.go:334] "Generic (PLEG): container finished" podID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerID="2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588" exitCode=0 Feb 28 11:00:09 crc kubenswrapper[4996]: I0228 11:00:09.426344 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh49d" 
event={"ID":"69b1986e-d53e-4757-b909-311a0b12f7ab","Type":"ContainerDied","Data":"2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588"} Feb 28 11:00:10 crc kubenswrapper[4996]: I0228 11:00:10.440813 4996 generic.go:334] "Generic (PLEG): container finished" podID="f6886487-0ab2-404d-aa70-4be59320885a" containerID="fb7664002177962bac5da489890ae2aa91714240db07dccbb05ae036c769a79b" exitCode=0 Feb 28 11:00:10 crc kubenswrapper[4996]: I0228 11:00:10.440922 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"f6886487-0ab2-404d-aa70-4be59320885a","Type":"ContainerDied","Data":"fb7664002177962bac5da489890ae2aa91714240db07dccbb05ae036c769a79b"} Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.452140 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh49d" event={"ID":"69b1986e-d53e-4757-b909-311a0b12f7ab","Type":"ContainerStarted","Data":"519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58"} Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.476333 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mh49d" podStartSLOduration=2.533649343 podStartE2EDuration="6.476309823s" podCreationTimestamp="2026-02-28 11:00:05 +0000 UTC" firstStartedPulling="2026-02-28 11:00:06.390916361 +0000 UTC m=+7170.081719192" lastFinishedPulling="2026-02-28 11:00:10.333576861 +0000 UTC m=+7174.024379672" observedRunningTime="2026-02-28 11:00:11.466729058 +0000 UTC m=+7175.157531889" watchObservedRunningTime="2026-02-28 11:00:11.476309823 +0000 UTC m=+7175.167112634" Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.805934 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ansibletest-ansibletest" Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.955626 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ceph\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.955737 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.955765 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ca-certs\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.955817 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-workdir\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.955853 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-compute-ssh-secret\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.955874 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hmpd\" (UniqueName: 
\"kubernetes.io/projected/f6886487-0ab2-404d-aa70-4be59320885a-kube-api-access-9hmpd\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.955901 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-workload-ssh-secret\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.955945 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config-secret\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.955965 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.956645 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-temporary\") pod \"f6886487-0ab2-404d-aa70-4be59320885a\" (UID: \"f6886487-0ab2-404d-aa70-4be59320885a\") " Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.956945 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod 
"f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.957264 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.961996 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6886487-0ab2-404d-aa70-4be59320885a-kube-api-access-9hmpd" (OuterVolumeSpecName: "kube-api-access-9hmpd") pod "f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "kube-api-access-9hmpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.963112 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ceph" (OuterVolumeSpecName: "ceph") pod "f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.966127 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "local-storage11-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 11:00:11 crc kubenswrapper[4996]: I0228 11:00:11.974752 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:11.992379 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-compute-ssh-secret" (OuterVolumeSpecName: "compute-ssh-secret") pod "f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "compute-ssh-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:11.992772 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.004332 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-workload-ssh-secret" (OuterVolumeSpecName: "workload-ssh-secret") pod "f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "workload-ssh-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.011294 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.024986 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "f6886487-0ab2-404d-aa70-4be59320885a" (UID: "f6886487-0ab2-404d-aa70-4be59320885a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.060769 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.060813 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f6886487-0ab2-404d-aa70-4be59320885a-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.060827 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.060871 4996 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 28 11:00:12 crc 
kubenswrapper[4996]: I0228 11:00:12.060885 4996 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.060897 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/f6886487-0ab2-404d-aa70-4be59320885a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.060950 4996 reconciler_common.go:293] "Volume detached for volume \"compute-ssh-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-compute-ssh-secret\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.060965 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hmpd\" (UniqueName: \"kubernetes.io/projected/f6886487-0ab2-404d-aa70-4be59320885a-kube-api-access-9hmpd\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.060977 4996 reconciler_common.go:293] "Volume detached for volume \"workload-ssh-secret\" (UniqueName: \"kubernetes.io/secret/f6886487-0ab2-404d-aa70-4be59320885a-workload-ssh-secret\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.084128 4996 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.165420 4996 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.249347 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.249409 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.249452 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.250151 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.250205 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" gracePeriod=600 Feb 28 11:00:12 crc kubenswrapper[4996]: E0228 11:00:12.368490 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.461252 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ansibletest-ansibletest" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.463522 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ansibletest-ansibletest" event={"ID":"f6886487-0ab2-404d-aa70-4be59320885a","Type":"ContainerDied","Data":"f22745281efb317ffcc138861bfe7e8e84feec3cd9de549c94fbae4c78317a92"} Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.463574 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f22745281efb317ffcc138861bfe7e8e84feec3cd9de549c94fbae4c78317a92" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.468116 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" exitCode=0 Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.469183 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e"} Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.469257 4996 scope.go:117] "RemoveContainer" containerID="2d7749c606980be590cdf9e1ba9745d9acb89d5faee0102da2aab1679aa836ba" Feb 28 11:00:12 crc kubenswrapper[4996]: I0228 11:00:12.469678 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:00:12 crc kubenswrapper[4996]: E0228 11:00:12.469999 4996 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:00:14 crc kubenswrapper[4996]: I0228 11:00:14.829541 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Feb 28 11:00:14 crc kubenswrapper[4996]: E0228 11:00:14.830305 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6886487-0ab2-404d-aa70-4be59320885a" containerName="ansibletest-ansibletest" Feb 28 11:00:14 crc kubenswrapper[4996]: I0228 11:00:14.830320 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6886487-0ab2-404d-aa70-4be59320885a" containerName="ansibletest-ansibletest" Feb 28 11:00:14 crc kubenswrapper[4996]: E0228 11:00:14.830351 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b07b45-80e5-48e7-a828-37120ebe17c4" containerName="oc" Feb 28 11:00:14 crc kubenswrapper[4996]: I0228 11:00:14.830359 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b07b45-80e5-48e7-a828-37120ebe17c4" containerName="oc" Feb 28 11:00:14 crc kubenswrapper[4996]: I0228 11:00:14.830809 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6886487-0ab2-404d-aa70-4be59320885a" containerName="ansibletest-ansibletest" Feb 28 11:00:14 crc kubenswrapper[4996]: I0228 11:00:14.830842 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b07b45-80e5-48e7-a828-37120ebe17c4" containerName="oc" Feb 28 11:00:14 crc kubenswrapper[4996]: I0228 11:00:14.831652 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Feb 28 11:00:14 crc kubenswrapper[4996]: I0228 11:00:14.838979 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Feb 28 11:00:14 crc kubenswrapper[4996]: I0228 11:00:14.923107 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r48dm\" (UniqueName: \"kubernetes.io/projected/502af5eb-df11-47d8-b386-7c8dc19e280c-kube-api-access-r48dm\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"502af5eb-df11-47d8-b386-7c8dc19e280c\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Feb 28 11:00:14 crc kubenswrapper[4996]: I0228 11:00:14.923208 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"502af5eb-df11-47d8-b386-7c8dc19e280c\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.025173 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r48dm\" (UniqueName: \"kubernetes.io/projected/502af5eb-df11-47d8-b386-7c8dc19e280c-kube-api-access-r48dm\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"502af5eb-df11-47d8-b386-7c8dc19e280c\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.025222 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: 
\"502af5eb-df11-47d8-b386-7c8dc19e280c\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.025623 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"502af5eb-df11-47d8-b386-7c8dc19e280c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.051572 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r48dm\" (UniqueName: \"kubernetes.io/projected/502af5eb-df11-47d8-b386-7c8dc19e280c-kube-api-access-r48dm\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"502af5eb-df11-47d8-b386-7c8dc19e280c\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.054856 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-ansibletest-ansibletest-ansibletest\" (UID: \"502af5eb-df11-47d8-b386-7c8dc19e280c\") " pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.222332 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.585221 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.585466 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.632211 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:15 crc kubenswrapper[4996]: I0228 11:00:15.679229 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest"] Feb 28 11:00:16 crc kubenswrapper[4996]: I0228 11:00:16.514605 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"502af5eb-df11-47d8-b386-7c8dc19e280c","Type":"ContainerStarted","Data":"9489acdfe96cdae1a513bc660415031d504e685608d5acdfc454d87c26f689d4"} Feb 28 11:00:16 crc kubenswrapper[4996]: I0228 11:00:16.562756 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:16 crc kubenswrapper[4996]: I0228 11:00:16.614829 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mh49d"] Feb 28 11:00:18 crc kubenswrapper[4996]: I0228 11:00:18.531587 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mh49d" podUID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerName="registry-server" containerID="cri-o://519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58" gracePeriod=2 Feb 28 11:00:18 crc kubenswrapper[4996]: I0228 
11:00:18.992240 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.116221 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-catalog-content\") pod \"69b1986e-d53e-4757-b909-311a0b12f7ab\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.116298 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-utilities\") pod \"69b1986e-d53e-4757-b909-311a0b12f7ab\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.116458 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scfl8\" (UniqueName: \"kubernetes.io/projected/69b1986e-d53e-4757-b909-311a0b12f7ab-kube-api-access-scfl8\") pod \"69b1986e-d53e-4757-b909-311a0b12f7ab\" (UID: \"69b1986e-d53e-4757-b909-311a0b12f7ab\") " Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.117350 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-utilities" (OuterVolumeSpecName: "utilities") pod "69b1986e-d53e-4757-b909-311a0b12f7ab" (UID: "69b1986e-d53e-4757-b909-311a0b12f7ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.117789 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.121345 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b1986e-d53e-4757-b909-311a0b12f7ab-kube-api-access-scfl8" (OuterVolumeSpecName: "kube-api-access-scfl8") pod "69b1986e-d53e-4757-b909-311a0b12f7ab" (UID: "69b1986e-d53e-4757-b909-311a0b12f7ab"). InnerVolumeSpecName "kube-api-access-scfl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.175492 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69b1986e-d53e-4757-b909-311a0b12f7ab" (UID: "69b1986e-d53e-4757-b909-311a0b12f7ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.219636 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scfl8\" (UniqueName: \"kubernetes.io/projected/69b1986e-d53e-4757-b909-311a0b12f7ab-kube-api-access-scfl8\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.219980 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b1986e-d53e-4757-b909-311a0b12f7ab-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.545490 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" event={"ID":"502af5eb-df11-47d8-b386-7c8dc19e280c","Type":"ContainerStarted","Data":"fc83d58e170975a8fb5df5aa0da2f8500dbfb5695aca6836a12d567dc9141902"} Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.550712 4996 generic.go:334] "Generic (PLEG): container finished" podID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerID="519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58" exitCode=0 Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.550755 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh49d" event={"ID":"69b1986e-d53e-4757-b909-311a0b12f7ab","Type":"ContainerDied","Data":"519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58"} Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.550779 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mh49d" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.550922 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mh49d" event={"ID":"69b1986e-d53e-4757-b909-311a0b12f7ab","Type":"ContainerDied","Data":"801e7a4cde2feddb99ba14e3ef8f2929ec375582a0c9c1a78734a7350ed51927"} Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.551081 4996 scope.go:117] "RemoveContainer" containerID="519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.563466 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-ansibletest-ansibletest-ansibletest" podStartSLOduration=2.616823767 podStartE2EDuration="5.56344492s" podCreationTimestamp="2026-02-28 11:00:14 +0000 UTC" firstStartedPulling="2026-02-28 11:00:15.688284013 +0000 UTC m=+7179.379086824" lastFinishedPulling="2026-02-28 11:00:18.634905166 +0000 UTC m=+7182.325707977" observedRunningTime="2026-02-28 11:00:19.561752348 +0000 UTC m=+7183.252555169" watchObservedRunningTime="2026-02-28 11:00:19.56344492 +0000 UTC m=+7183.254247731" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.588193 4996 scope.go:117] "RemoveContainer" containerID="2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.598603 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mh49d"] Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.610396 4996 scope.go:117] "RemoveContainer" containerID="35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.611572 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mh49d"] Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.629317 
4996 scope.go:117] "RemoveContainer" containerID="519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58" Feb 28 11:00:19 crc kubenswrapper[4996]: E0228 11:00:19.629922 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58\": container with ID starting with 519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58 not found: ID does not exist" containerID="519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.630063 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58"} err="failed to get container status \"519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58\": rpc error: code = NotFound desc = could not find container \"519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58\": container with ID starting with 519c1c4beccb1021d9f3ce1ac5e614603ed9e9b181960aa2a0a782484e44cf58 not found: ID does not exist" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.630220 4996 scope.go:117] "RemoveContainer" containerID="2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588" Feb 28 11:00:19 crc kubenswrapper[4996]: E0228 11:00:19.630761 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588\": container with ID starting with 2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588 not found: ID does not exist" containerID="2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.630810 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588"} err="failed to get container status \"2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588\": rpc error: code = NotFound desc = could not find container \"2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588\": container with ID starting with 2c3ba5a17924af539071640166a282186fcf7975738fccd8bd354d8009cdf588 not found: ID does not exist" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.630837 4996 scope.go:117] "RemoveContainer" containerID="35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6" Feb 28 11:00:19 crc kubenswrapper[4996]: E0228 11:00:19.631312 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6\": container with ID starting with 35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6 not found: ID does not exist" containerID="35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6" Feb 28 11:00:19 crc kubenswrapper[4996]: I0228 11:00:19.631397 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6"} err="failed to get container status \"35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6\": rpc error: code = NotFound desc = could not find container \"35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6\": container with ID starting with 35a610057568a7fa120f14c3feeb623b80e0cf1dfc2916c6fb38db502d2cc1d6 not found: ID does not exist" Feb 28 11:00:21 crc kubenswrapper[4996]: I0228 11:00:21.044142 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b1986e-d53e-4757-b909-311a0b12f7ab" path="/var/lib/kubelet/pods/69b1986e-d53e-4757-b909-311a0b12f7ab/volumes" Feb 28 11:00:26 crc kubenswrapper[4996]: I0228 
11:00:26.035458 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:00:26 crc kubenswrapper[4996]: E0228 11:00:26.036244 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.426224 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizontest-tests-horizontest"] Feb 28 11:00:28 crc kubenswrapper[4996]: E0228 11:00:28.427039 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerName="extract-content" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.427051 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerName="extract-content" Feb 28 11:00:28 crc kubenswrapper[4996]: E0228 11:00:28.427068 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerName="extract-utilities" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.427074 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerName="extract-utilities" Feb 28 11:00:28 crc kubenswrapper[4996]: E0228 11:00:28.427085 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerName="registry-server" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.427092 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerName="registry-server" Feb 28 11:00:28 
crc kubenswrapper[4996]: I0228 11:00:28.427302 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b1986e-d53e-4757-b909-311a0b12f7ab" containerName="registry-server" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.427897 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.433063 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizontest-tests-horizontesthorizontest-config" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.435806 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"test-operator-clouds-config" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.439921 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.509687 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.509954 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.510067 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-openstack-config-secret\") pod 
\"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.510162 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.510272 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b6hb\" (UniqueName: \"kubernetes.io/projected/915895e5-31ba-450f-b3e8-a385e5937353-kube-api-access-4b6hb\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.510347 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.510431 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.510558 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.612461 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.612733 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.612925 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.613147 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.613265 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceph\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.613378 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.613499 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.613642 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b6hb\" (UniqueName: \"kubernetes.io/projected/915895e5-31ba-450f-b3e8-a385e5937353-kube-api-access-4b6hb\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.613174 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.612957 4996 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-temporary\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.614289 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-clouds-config\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.614579 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-workdir\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.627660 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-openstack-config-secret\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.627717 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ca-certs\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.628784 4996 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ceph\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.630430 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b6hb\" (UniqueName: \"kubernetes.io/projected/915895e5-31ba-450f-b3e8-a385e5937353-kube-api-access-4b6hb\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.644322 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"horizontest-tests-horizontest\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:28 crc kubenswrapper[4996]: I0228 11:00:28.751129 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Feb 28 11:00:29 crc kubenswrapper[4996]: I0228 11:00:29.213324 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizontest-tests-horizontest"] Feb 28 11:00:29 crc kubenswrapper[4996]: I0228 11:00:29.634102 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"915895e5-31ba-450f-b3e8-a385e5937353","Type":"ContainerStarted","Data":"ddaba072c6bdd19e26c099c861c1267b8aa545ea0ff19ba67b131fe51f4603ec"} Feb 28 11:00:39 crc kubenswrapper[4996]: I0228 11:00:39.033061 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:00:39 crc kubenswrapper[4996]: E0228 11:00:39.033744 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:00:50 crc kubenswrapper[4996]: I0228 11:00:50.771655 4996 scope.go:117] "RemoveContainer" containerID="cbbc5d76d391f3b4e754ff52037e856d743037826c0aeb804f67cc219767902c" Feb 28 11:00:50 crc kubenswrapper[4996]: I0228 11:00:50.960633 4996 scope.go:117] "RemoveContainer" containerID="2b033874ec976c5b9edc5387276ef02a66da80d757b237549cb322f94ab1acf1" Feb 28 11:00:51 crc kubenswrapper[4996]: E0228 11:00:51.001807 4996 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizontest:current-podified" Feb 28 11:00:51 crc kubenswrapper[4996]: E0228 11:00:51.001989 4996 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizontest-tests-horizontest,Image:quay.io/podified-antelope-centos9/openstack-horizontest:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADMIN_PASSWORD,Value:12345678,ValueFrom:nil,},EnvVar{Name:ADMIN_USERNAME,Value:admin,ValueFrom:nil,},EnvVar{Name:AUTH_URL,Value:https://keystone-public-openstack.apps-crc.testing,ValueFrom:nil,},EnvVar{Name:DASHBOARD_URL,Value:https://horizon-openstack.apps-crc.testing/,ValueFrom:nil,},EnvVar{Name:EXTRA_FLAG,Value:not pagination and test_users.py,ValueFrom:nil,},EnvVar{Name:FLAVOR_NAME,Value:m1.tiny,ValueFrom:nil,},EnvVar{Name:HORIZONTEST_DEBUG_MODE,Value:false,ValueFrom:nil,},EnvVar{Name:HORIZON_KEYS_FOLDER,Value:/etc/test_operator,ValueFrom:nil,},EnvVar{Name:HORIZON_LOGS_DIR_NAME,Value:horizon,ValueFrom:nil,},EnvVar{Name:HORIZON_REPO_BRANCH,Value:master,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE,Value:/var/lib/horizontest/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:IMAGE_FILE_NAME,Value:cirros-0.6.2-x86_64-disk,ValueFrom:nil,},EnvVar{Name:IMAGE_URL,Value:http://download.cirros-cloud.net/0.6.2/cirros-0.6.2-x86_64-disk.img,ValueFrom:nil,},EnvVar{Name:PASSWORD,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:PROJECT_NAME_XPATH,Value://*[@class=\"context-project\"]//ancestor::ul,ValueFrom:nil,},EnvVar{Name:REPO_URL,Value:https://review.opendev.org/openstack/horizon,ValueFrom:nil,},EnvVar{Name:USER_NAME,Value:horizontest,ValueFrom:nil,},EnvVar{Name:USE_EXTERNAL_FILES,Value:True,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{2 0} {} 2 DecimalSI},memory: {{4294967296 0} {} 4Gi BinarySI},},Requests:ResourceList{cpu: {{1 0} {} 1 DecimalSI},memory: {{2147483648 0} {} 2Gi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/horizontest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/horizontest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/var/lib/horizontest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-clouds-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ca-bundle.trust.crt,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4b6hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN 
NET_RAW],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42455,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42455,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizontest-tests-horizontest_openstack(915895e5-31ba-450f-b3e8-a385e5937353): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 11:00:51 crc kubenswrapper[4996]: E0228 11:00:51.003480 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizontest-tests-horizontest" podUID="915895e5-31ba-450f-b3e8-a385e5937353" Feb 28 11:00:51 crc kubenswrapper[4996]: E0228 11:00:51.825044 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizontest-tests-horizontest\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizontest:current-podified\\\"\"" pod="openstack/horizontest-tests-horizontest" podUID="915895e5-31ba-450f-b3e8-a385e5937353" Feb 28 11:00:53 crc kubenswrapper[4996]: I0228 11:00:53.033477 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:00:53 crc kubenswrapper[4996]: E0228 11:00:53.034074 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.159671 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29537941-pj4nm"] Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.161673 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.175422 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29537941-pj4nm"] Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.269273 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-fernet-keys\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.269670 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w72p\" (UniqueName: \"kubernetes.io/projected/9acff40e-9809-41c5-b307-388aa1a815d2-kube-api-access-5w72p\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.269871 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-config-data\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 
11:01:00.270139 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-combined-ca-bundle\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.372684 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w72p\" (UniqueName: \"kubernetes.io/projected/9acff40e-9809-41c5-b307-388aa1a815d2-kube-api-access-5w72p\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.372744 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-config-data\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.372831 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-combined-ca-bundle\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.372870 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-fernet-keys\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.378945 4996 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-config-data\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.379986 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-fernet-keys\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.380553 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-combined-ca-bundle\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.392075 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w72p\" (UniqueName: \"kubernetes.io/projected/9acff40e-9809-41c5-b307-388aa1a815d2-kube-api-access-5w72p\") pod \"keystone-cron-29537941-pj4nm\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.483969 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:00 crc kubenswrapper[4996]: I0228 11:01:00.986848 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29537941-pj4nm"] Feb 28 11:01:01 crc kubenswrapper[4996]: I0228 11:01:01.905741 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537941-pj4nm" event={"ID":"9acff40e-9809-41c5-b307-388aa1a815d2","Type":"ContainerStarted","Data":"4c4af7d1f11bfa95a9ed645a52d62132bfecd80185ee835ecae2c0e16954ee5b"} Feb 28 11:01:01 crc kubenswrapper[4996]: I0228 11:01:01.906409 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537941-pj4nm" event={"ID":"9acff40e-9809-41c5-b307-388aa1a815d2","Type":"ContainerStarted","Data":"9fff655c94c63927d53b2fcc8418aa9807876e9ee7b408120375abff93b0ee27"} Feb 28 11:01:01 crc kubenswrapper[4996]: I0228 11:01:01.925826 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29537941-pj4nm" podStartSLOduration=1.9258067250000002 podStartE2EDuration="1.925806725s" podCreationTimestamp="2026-02-28 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 11:01:01.919558563 +0000 UTC m=+7225.610361374" watchObservedRunningTime="2026-02-28 11:01:01.925806725 +0000 UTC m=+7225.616609536" Feb 28 11:01:03 crc kubenswrapper[4996]: I0228 11:01:03.921440 4996 generic.go:334] "Generic (PLEG): container finished" podID="9acff40e-9809-41c5-b307-388aa1a815d2" containerID="4c4af7d1f11bfa95a9ed645a52d62132bfecd80185ee835ecae2c0e16954ee5b" exitCode=0 Feb 28 11:01:03 crc kubenswrapper[4996]: I0228 11:01:03.921640 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537941-pj4nm" 
event={"ID":"9acff40e-9809-41c5-b307-388aa1a815d2","Type":"ContainerDied","Data":"4c4af7d1f11bfa95a9ed645a52d62132bfecd80185ee835ecae2c0e16954ee5b"} Feb 28 11:01:04 crc kubenswrapper[4996]: I0228 11:01:04.046188 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:01:04 crc kubenswrapper[4996]: E0228 11:01:04.046419 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.295698 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.485540 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-config-data\") pod \"9acff40e-9809-41c5-b307-388aa1a815d2\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.485601 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-combined-ca-bundle\") pod \"9acff40e-9809-41c5-b307-388aa1a815d2\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.486313 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-fernet-keys\") pod 
\"9acff40e-9809-41c5-b307-388aa1a815d2\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.486364 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w72p\" (UniqueName: \"kubernetes.io/projected/9acff40e-9809-41c5-b307-388aa1a815d2-kube-api-access-5w72p\") pod \"9acff40e-9809-41c5-b307-388aa1a815d2\" (UID: \"9acff40e-9809-41c5-b307-388aa1a815d2\") " Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.492943 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9acff40e-9809-41c5-b307-388aa1a815d2" (UID: "9acff40e-9809-41c5-b307-388aa1a815d2"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.499290 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9acff40e-9809-41c5-b307-388aa1a815d2-kube-api-access-5w72p" (OuterVolumeSpecName: "kube-api-access-5w72p") pod "9acff40e-9809-41c5-b307-388aa1a815d2" (UID: "9acff40e-9809-41c5-b307-388aa1a815d2"). InnerVolumeSpecName "kube-api-access-5w72p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.529066 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9acff40e-9809-41c5-b307-388aa1a815d2" (UID: "9acff40e-9809-41c5-b307-388aa1a815d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.556088 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-config-data" (OuterVolumeSpecName: "config-data") pod "9acff40e-9809-41c5-b307-388aa1a815d2" (UID: "9acff40e-9809-41c5-b307-388aa1a815d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.588597 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w72p\" (UniqueName: \"kubernetes.io/projected/9acff40e-9809-41c5-b307-388aa1a815d2-kube-api-access-5w72p\") on node \"crc\" DevicePath \"\"" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.588653 4996 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.588672 4996 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.588687 4996 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9acff40e-9809-41c5-b307-388aa1a815d2-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.943777 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"915895e5-31ba-450f-b3e8-a385e5937353","Type":"ContainerStarted","Data":"d9e282eaa6b943e44dec967da112367f62eab3d35707ec58b4a677cfe1909b25"} Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.945361 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-cron-29537941-pj4nm" event={"ID":"9acff40e-9809-41c5-b307-388aa1a815d2","Type":"ContainerDied","Data":"9fff655c94c63927d53b2fcc8418aa9807876e9ee7b408120375abff93b0ee27"} Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.945403 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fff655c94c63927d53b2fcc8418aa9807876e9ee7b408120375abff93b0ee27" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.945417 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29537941-pj4nm" Feb 28 11:01:05 crc kubenswrapper[4996]: I0228 11:01:05.964129 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizontest-tests-horizontest" podStartSLOduration=3.3845674519999998 podStartE2EDuration="38.964099081s" podCreationTimestamp="2026-02-28 11:00:27 +0000 UTC" firstStartedPulling="2026-02-28 11:00:29.216592401 +0000 UTC m=+7192.907395232" lastFinishedPulling="2026-02-28 11:01:04.79612403 +0000 UTC m=+7228.486926861" observedRunningTime="2026-02-28 11:01:05.959927038 +0000 UTC m=+7229.650729849" watchObservedRunningTime="2026-02-28 11:01:05.964099081 +0000 UTC m=+7229.654901892" Feb 28 11:01:17 crc kubenswrapper[4996]: I0228 11:01:17.039163 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:01:17 crc kubenswrapper[4996]: E0228 11:01:17.040046 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:01:28 crc kubenswrapper[4996]: I0228 11:01:28.033677 4996 scope.go:117] 
"RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:01:28 crc kubenswrapper[4996]: E0228 11:01:28.034873 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:01:40 crc kubenswrapper[4996]: I0228 11:01:40.034096 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:01:40 crc kubenswrapper[4996]: E0228 11:01:40.034768 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.368785 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5qsgw"] Feb 28 11:01:43 crc kubenswrapper[4996]: E0228 11:01:43.369698 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9acff40e-9809-41c5-b307-388aa1a815d2" containerName="keystone-cron" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.369712 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="9acff40e-9809-41c5-b307-388aa1a815d2" containerName="keystone-cron" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.369910 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="9acff40e-9809-41c5-b307-388aa1a815d2" 
containerName="keystone-cron" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.371159 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.384429 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qsgw"] Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.392972 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-utilities\") pod \"community-operators-5qsgw\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.393035 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn7gp\" (UniqueName: \"kubernetes.io/projected/02132e30-9c7f-478c-9259-10c18aa6b8d2-kube-api-access-cn7gp\") pod \"community-operators-5qsgw\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.393072 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-catalog-content\") pod \"community-operators-5qsgw\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.495155 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-utilities\") pod \"community-operators-5qsgw\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " 
pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.495469 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn7gp\" (UniqueName: \"kubernetes.io/projected/02132e30-9c7f-478c-9259-10c18aa6b8d2-kube-api-access-cn7gp\") pod \"community-operators-5qsgw\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.495504 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-catalog-content\") pod \"community-operators-5qsgw\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.495737 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-utilities\") pod \"community-operators-5qsgw\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.496128 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-catalog-content\") pod \"community-operators-5qsgw\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.515890 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn7gp\" (UniqueName: \"kubernetes.io/projected/02132e30-9c7f-478c-9259-10c18aa6b8d2-kube-api-access-cn7gp\") pod \"community-operators-5qsgw\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " 
pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:43 crc kubenswrapper[4996]: I0228 11:01:43.693678 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:44 crc kubenswrapper[4996]: I0228 11:01:44.101854 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5qsgw"] Feb 28 11:01:44 crc kubenswrapper[4996]: W0228 11:01:44.102150 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02132e30_9c7f_478c_9259_10c18aa6b8d2.slice/crio-717e13b12b3bd0a59b0e6b77397c92e4e6da84e7acfbc0acb0920a233cee8ce5 WatchSource:0}: Error finding container 717e13b12b3bd0a59b0e6b77397c92e4e6da84e7acfbc0acb0920a233cee8ce5: Status 404 returned error can't find the container with id 717e13b12b3bd0a59b0e6b77397c92e4e6da84e7acfbc0acb0920a233cee8ce5 Feb 28 11:01:44 crc kubenswrapper[4996]: I0228 11:01:44.319108 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qsgw" event={"ID":"02132e30-9c7f-478c-9259-10c18aa6b8d2","Type":"ContainerStarted","Data":"22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9"} Feb 28 11:01:44 crc kubenswrapper[4996]: I0228 11:01:44.319491 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qsgw" event={"ID":"02132e30-9c7f-478c-9259-10c18aa6b8d2","Type":"ContainerStarted","Data":"717e13b12b3bd0a59b0e6b77397c92e4e6da84e7acfbc0acb0920a233cee8ce5"} Feb 28 11:01:45 crc kubenswrapper[4996]: I0228 11:01:45.329770 4996 generic.go:334] "Generic (PLEG): container finished" podID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerID="22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9" exitCode=0 Feb 28 11:01:45 crc kubenswrapper[4996]: I0228 11:01:45.329833 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-5qsgw" event={"ID":"02132e30-9c7f-478c-9259-10c18aa6b8d2","Type":"ContainerDied","Data":"22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9"} Feb 28 11:01:46 crc kubenswrapper[4996]: I0228 11:01:46.338896 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qsgw" event={"ID":"02132e30-9c7f-478c-9259-10c18aa6b8d2","Type":"ContainerStarted","Data":"79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2"} Feb 28 11:01:48 crc kubenswrapper[4996]: I0228 11:01:48.358824 4996 generic.go:334] "Generic (PLEG): container finished" podID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerID="79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2" exitCode=0 Feb 28 11:01:48 crc kubenswrapper[4996]: I0228 11:01:48.358898 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qsgw" event={"ID":"02132e30-9c7f-478c-9259-10c18aa6b8d2","Type":"ContainerDied","Data":"79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2"} Feb 28 11:01:49 crc kubenswrapper[4996]: I0228 11:01:49.373508 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qsgw" event={"ID":"02132e30-9c7f-478c-9259-10c18aa6b8d2","Type":"ContainerStarted","Data":"3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19"} Feb 28 11:01:49 crc kubenswrapper[4996]: I0228 11:01:49.396895 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5qsgw" podStartSLOduration=2.989909254 podStartE2EDuration="6.396876788s" podCreationTimestamp="2026-02-28 11:01:43 +0000 UTC" firstStartedPulling="2026-02-28 11:01:45.332607257 +0000 UTC m=+7269.023410068" lastFinishedPulling="2026-02-28 11:01:48.739574791 +0000 UTC m=+7272.430377602" observedRunningTime="2026-02-28 11:01:49.393110726 +0000 UTC m=+7273.083913537" 
watchObservedRunningTime="2026-02-28 11:01:49.396876788 +0000 UTC m=+7273.087679619" Feb 28 11:01:53 crc kubenswrapper[4996]: I0228 11:01:53.694230 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:53 crc kubenswrapper[4996]: I0228 11:01:53.694763 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:01:54 crc kubenswrapper[4996]: I0228 11:01:54.754703 4996 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5qsgw" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerName="registry-server" probeResult="failure" output=< Feb 28 11:01:54 crc kubenswrapper[4996]: timeout: failed to connect service ":50051" within 1s Feb 28 11:01:54 crc kubenswrapper[4996]: > Feb 28 11:01:55 crc kubenswrapper[4996]: I0228 11:01:55.033373 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:01:55 crc kubenswrapper[4996]: E0228 11:01:55.033645 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.150595 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537942-fwrs7"] Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.152269 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537942-fwrs7" Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.155062 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.155280 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.155381 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.172299 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537942-fwrs7"] Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.220032 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rx9f\" (UniqueName: \"kubernetes.io/projected/3b24cca9-b985-4d62-bcf6-6f5327a53251-kube-api-access-7rx9f\") pod \"auto-csr-approver-29537942-fwrs7\" (UID: \"3b24cca9-b985-4d62-bcf6-6f5327a53251\") " pod="openshift-infra/auto-csr-approver-29537942-fwrs7" Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.325502 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rx9f\" (UniqueName: \"kubernetes.io/projected/3b24cca9-b985-4d62-bcf6-6f5327a53251-kube-api-access-7rx9f\") pod \"auto-csr-approver-29537942-fwrs7\" (UID: \"3b24cca9-b985-4d62-bcf6-6f5327a53251\") " pod="openshift-infra/auto-csr-approver-29537942-fwrs7" Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.353492 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rx9f\" (UniqueName: \"kubernetes.io/projected/3b24cca9-b985-4d62-bcf6-6f5327a53251-kube-api-access-7rx9f\") pod \"auto-csr-approver-29537942-fwrs7\" (UID: \"3b24cca9-b985-4d62-bcf6-6f5327a53251\") " 
pod="openshift-infra/auto-csr-approver-29537942-fwrs7" Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.484240 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537942-fwrs7" Feb 28 11:02:00 crc kubenswrapper[4996]: I0228 11:02:00.977576 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537942-fwrs7"] Feb 28 11:02:01 crc kubenswrapper[4996]: I0228 11:02:01.468805 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537942-fwrs7" event={"ID":"3b24cca9-b985-4d62-bcf6-6f5327a53251","Type":"ContainerStarted","Data":"d005f7ade1e24b29d8e90c0a589813acfacf5da994c9af94c26c1331fafa831d"} Feb 28 11:02:02 crc kubenswrapper[4996]: I0228 11:02:02.479107 4996 generic.go:334] "Generic (PLEG): container finished" podID="3b24cca9-b985-4d62-bcf6-6f5327a53251" containerID="7aa6890b8748cc6822c6a7ce338b3177e9dd678c8644750602f61854c46c059a" exitCode=0 Feb 28 11:02:02 crc kubenswrapper[4996]: I0228 11:02:02.479311 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537942-fwrs7" event={"ID":"3b24cca9-b985-4d62-bcf6-6f5327a53251","Type":"ContainerDied","Data":"7aa6890b8748cc6822c6a7ce338b3177e9dd678c8644750602f61854c46c059a"} Feb 28 11:02:03 crc kubenswrapper[4996]: I0228 11:02:03.751155 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:02:03 crc kubenswrapper[4996]: I0228 11:02:03.810516 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:02:03 crc kubenswrapper[4996]: I0228 11:02:03.861370 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537942-fwrs7" Feb 28 11:02:03 crc kubenswrapper[4996]: I0228 11:02:03.912560 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rx9f\" (UniqueName: \"kubernetes.io/projected/3b24cca9-b985-4d62-bcf6-6f5327a53251-kube-api-access-7rx9f\") pod \"3b24cca9-b985-4d62-bcf6-6f5327a53251\" (UID: \"3b24cca9-b985-4d62-bcf6-6f5327a53251\") " Feb 28 11:02:03 crc kubenswrapper[4996]: I0228 11:02:03.932227 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b24cca9-b985-4d62-bcf6-6f5327a53251-kube-api-access-7rx9f" (OuterVolumeSpecName: "kube-api-access-7rx9f") pod "3b24cca9-b985-4d62-bcf6-6f5327a53251" (UID: "3b24cca9-b985-4d62-bcf6-6f5327a53251"). InnerVolumeSpecName "kube-api-access-7rx9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:02:03 crc kubenswrapper[4996]: I0228 11:02:03.997523 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qsgw"] Feb 28 11:02:04 crc kubenswrapper[4996]: I0228 11:02:04.015381 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rx9f\" (UniqueName: \"kubernetes.io/projected/3b24cca9-b985-4d62-bcf6-6f5327a53251-kube-api-access-7rx9f\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:04 crc kubenswrapper[4996]: I0228 11:02:04.500823 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537942-fwrs7" event={"ID":"3b24cca9-b985-4d62-bcf6-6f5327a53251","Type":"ContainerDied","Data":"d005f7ade1e24b29d8e90c0a589813acfacf5da994c9af94c26c1331fafa831d"} Feb 28 11:02:04 crc kubenswrapper[4996]: I0228 11:02:04.500871 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d005f7ade1e24b29d8e90c0a589813acfacf5da994c9af94c26c1331fafa831d" Feb 28 11:02:04 crc kubenswrapper[4996]: I0228 11:02:04.500879 4996 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537942-fwrs7" Feb 28 11:02:04 crc kubenswrapper[4996]: I0228 11:02:04.930991 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537936-pqhss"] Feb 28 11:02:04 crc kubenswrapper[4996]: I0228 11:02:04.948096 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537936-pqhss"] Feb 28 11:02:05 crc kubenswrapper[4996]: I0228 11:02:05.043857 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5781ac45-2386-4e29-90e8-2d8b81401ace" path="/var/lib/kubelet/pods/5781ac45-2386-4e29-90e8-2d8b81401ace/volumes" Feb 28 11:02:05 crc kubenswrapper[4996]: I0228 11:02:05.508778 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5qsgw" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerName="registry-server" containerID="cri-o://3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19" gracePeriod=2 Feb 28 11:02:05 crc kubenswrapper[4996]: I0228 11:02:05.990993 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.156453 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-utilities\") pod \"02132e30-9c7f-478c-9259-10c18aa6b8d2\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.156896 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-catalog-content\") pod \"02132e30-9c7f-478c-9259-10c18aa6b8d2\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.156971 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn7gp\" (UniqueName: \"kubernetes.io/projected/02132e30-9c7f-478c-9259-10c18aa6b8d2-kube-api-access-cn7gp\") pod \"02132e30-9c7f-478c-9259-10c18aa6b8d2\" (UID: \"02132e30-9c7f-478c-9259-10c18aa6b8d2\") " Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.157539 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-utilities" (OuterVolumeSpecName: "utilities") pod "02132e30-9c7f-478c-9259-10c18aa6b8d2" (UID: "02132e30-9c7f-478c-9259-10c18aa6b8d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.165333 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02132e30-9c7f-478c-9259-10c18aa6b8d2-kube-api-access-cn7gp" (OuterVolumeSpecName: "kube-api-access-cn7gp") pod "02132e30-9c7f-478c-9259-10c18aa6b8d2" (UID: "02132e30-9c7f-478c-9259-10c18aa6b8d2"). InnerVolumeSpecName "kube-api-access-cn7gp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.222618 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02132e30-9c7f-478c-9259-10c18aa6b8d2" (UID: "02132e30-9c7f-478c-9259-10c18aa6b8d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.259203 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn7gp\" (UniqueName: \"kubernetes.io/projected/02132e30-9c7f-478c-9259-10c18aa6b8d2-kube-api-access-cn7gp\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.259241 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.259251 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02132e30-9c7f-478c-9259-10c18aa6b8d2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.518440 4996 generic.go:334] "Generic (PLEG): container finished" podID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerID="3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19" exitCode=0 Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.518492 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5qsgw" event={"ID":"02132e30-9c7f-478c-9259-10c18aa6b8d2","Type":"ContainerDied","Data":"3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19"} Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.518522 4996 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-5qsgw" event={"ID":"02132e30-9c7f-478c-9259-10c18aa6b8d2","Type":"ContainerDied","Data":"717e13b12b3bd0a59b0e6b77397c92e4e6da84e7acfbc0acb0920a233cee8ce5"} Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.518543 4996 scope.go:117] "RemoveContainer" containerID="3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.518696 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5qsgw" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.557046 4996 scope.go:117] "RemoveContainer" containerID="79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.558851 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5qsgw"] Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.568062 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5qsgw"] Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.579604 4996 scope.go:117] "RemoveContainer" containerID="22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.625651 4996 scope.go:117] "RemoveContainer" containerID="3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19" Feb 28 11:02:06 crc kubenswrapper[4996]: E0228 11:02:06.626062 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19\": container with ID starting with 3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19 not found: ID does not exist" containerID="3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 
11:02:06.626120 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19"} err="failed to get container status \"3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19\": rpc error: code = NotFound desc = could not find container \"3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19\": container with ID starting with 3536c37f6492c4c54a134b5f74f9d4fb58eb99016db2f003871e880533ab0e19 not found: ID does not exist" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.626405 4996 scope.go:117] "RemoveContainer" containerID="79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2" Feb 28 11:02:06 crc kubenswrapper[4996]: E0228 11:02:06.627164 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2\": container with ID starting with 79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2 not found: ID does not exist" containerID="79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.627195 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2"} err="failed to get container status \"79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2\": rpc error: code = NotFound desc = could not find container \"79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2\": container with ID starting with 79564a2eda8884257bea3aa5260e28167cda30d626962645398bc3c53babb0b2 not found: ID does not exist" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.627220 4996 scope.go:117] "RemoveContainer" containerID="22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9" Feb 28 11:02:06 crc 
kubenswrapper[4996]: E0228 11:02:06.627511 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9\": container with ID starting with 22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9 not found: ID does not exist" containerID="22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9" Feb 28 11:02:06 crc kubenswrapper[4996]: I0228 11:02:06.627546 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9"} err="failed to get container status \"22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9\": rpc error: code = NotFound desc = could not find container \"22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9\": container with ID starting with 22050880af3566f1a8e8310f0ab7f7f2ba5107c81c9b6f740948b47cc38d85e9 not found: ID does not exist" Feb 28 11:02:07 crc kubenswrapper[4996]: I0228 11:02:07.051770 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" path="/var/lib/kubelet/pods/02132e30-9c7f-478c-9259-10c18aa6b8d2/volumes" Feb 28 11:02:09 crc kubenswrapper[4996]: I0228 11:02:09.034173 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:02:09 crc kubenswrapper[4996]: E0228 11:02:09.034721 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:02:22 crc 
kubenswrapper[4996]: I0228 11:02:22.033036 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:02:22 crc kubenswrapper[4996]: E0228 11:02:22.033898 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:02:33 crc kubenswrapper[4996]: I0228 11:02:33.032854 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:02:33 crc kubenswrapper[4996]: E0228 11:02:33.033653 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:02:44 crc kubenswrapper[4996]: I0228 11:02:44.033457 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:02:44 crc kubenswrapper[4996]: E0228 11:02:44.034306 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 
28 11:02:51 crc kubenswrapper[4996]: I0228 11:02:51.079368 4996 scope.go:117] "RemoveContainer" containerID="407096447e3af8433244879b3c40c90a9681311534b577e80928bb6b6e53bc09" Feb 28 11:02:55 crc kubenswrapper[4996]: I0228 11:02:55.032968 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:02:55 crc kubenswrapper[4996]: E0228 11:02:55.033740 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:02:56 crc kubenswrapper[4996]: I0228 11:02:56.989732 4996 generic.go:334] "Generic (PLEG): container finished" podID="915895e5-31ba-450f-b3e8-a385e5937353" containerID="d9e282eaa6b943e44dec967da112367f62eab3d35707ec58b4a677cfe1909b25" exitCode=0 Feb 28 11:02:56 crc kubenswrapper[4996]: I0228 11:02:56.989845 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"915895e5-31ba-450f-b3e8-a385e5937353","Type":"ContainerDied","Data":"d9e282eaa6b943e44dec967da112367f62eab3d35707ec58b4a677cfe1909b25"} Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.353522 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.478650 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-temporary\") pod \"915895e5-31ba-450f-b3e8-a385e5937353\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.478716 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-workdir\") pod \"915895e5-31ba-450f-b3e8-a385e5937353\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.478752 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-openstack-config-secret\") pod \"915895e5-31ba-450f-b3e8-a385e5937353\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.478788 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ca-certs\") pod \"915895e5-31ba-450f-b3e8-a385e5937353\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.478836 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b6hb\" (UniqueName: \"kubernetes.io/projected/915895e5-31ba-450f-b3e8-a385e5937353-kube-api-access-4b6hb\") pod \"915895e5-31ba-450f-b3e8-a385e5937353\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.478926 4996 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-clouds-config\") pod \"915895e5-31ba-450f-b3e8-a385e5937353\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.478984 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ceph\") pod \"915895e5-31ba-450f-b3e8-a385e5937353\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.479134 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"915895e5-31ba-450f-b3e8-a385e5937353\" (UID: \"915895e5-31ba-450f-b3e8-a385e5937353\") " Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.480137 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "915895e5-31ba-450f-b3e8-a385e5937353" (UID: "915895e5-31ba-450f-b3e8-a385e5937353"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.486098 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ceph" (OuterVolumeSpecName: "ceph") pod "915895e5-31ba-450f-b3e8-a385e5937353" (UID: "915895e5-31ba-450f-b3e8-a385e5937353"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.492821 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915895e5-31ba-450f-b3e8-a385e5937353-kube-api-access-4b6hb" (OuterVolumeSpecName: "kube-api-access-4b6hb") pod "915895e5-31ba-450f-b3e8-a385e5937353" (UID: "915895e5-31ba-450f-b3e8-a385e5937353"). InnerVolumeSpecName "kube-api-access-4b6hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.494299 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "915895e5-31ba-450f-b3e8-a385e5937353" (UID: "915895e5-31ba-450f-b3e8-a385e5937353"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.507431 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "915895e5-31ba-450f-b3e8-a385e5937353" (UID: "915895e5-31ba-450f-b3e8-a385e5937353"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.524784 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-clouds-config" (OuterVolumeSpecName: "test-operator-clouds-config") pod "915895e5-31ba-450f-b3e8-a385e5937353" (UID: "915895e5-31ba-450f-b3e8-a385e5937353"). InnerVolumeSpecName "test-operator-clouds-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.532300 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "915895e5-31ba-450f-b3e8-a385e5937353" (UID: "915895e5-31ba-450f-b3e8-a385e5937353"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.581675 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-clouds-config\" (UniqueName: \"kubernetes.io/configmap/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-clouds-config\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.582244 4996 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ceph\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.582348 4996 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.582412 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.582479 4996 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.582537 4996 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/secret/915895e5-31ba-450f-b3e8-a385e5937353-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.582604 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b6hb\" (UniqueName: \"kubernetes.io/projected/915895e5-31ba-450f-b3e8-a385e5937353-kube-api-access-4b6hb\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.599317 4996 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.684828 4996 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.691946 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "915895e5-31ba-450f-b3e8-a385e5937353" (UID: "915895e5-31ba-450f-b3e8-a385e5937353"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:02:58 crc kubenswrapper[4996]: I0228 11:02:58.787363 4996 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/915895e5-31ba-450f-b3e8-a385e5937353-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 28 11:02:59 crc kubenswrapper[4996]: I0228 11:02:59.007538 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizontest-tests-horizontest" event={"ID":"915895e5-31ba-450f-b3e8-a385e5937353","Type":"ContainerDied","Data":"ddaba072c6bdd19e26c099c861c1267b8aa545ea0ff19ba67b131fe51f4603ec"} Feb 28 11:02:59 crc kubenswrapper[4996]: I0228 11:02:59.007576 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddaba072c6bdd19e26c099c861c1267b8aa545ea0ff19ba67b131fe51f4603ec" Feb 28 11:02:59 crc kubenswrapper[4996]: I0228 11:02:59.007625 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizontest-tests-horizontest" Feb 28 11:03:07 crc kubenswrapper[4996]: I0228 11:03:07.040734 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:03:07 crc kubenswrapper[4996]: E0228 11:03:07.041575 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.991547 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Feb 28 11:03:09 crc kubenswrapper[4996]: E0228 11:03:09.992650 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerName="registry-server" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.992669 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerName="registry-server" Feb 28 11:03:09 crc kubenswrapper[4996]: E0228 11:03:09.992685 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerName="extract-utilities" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.992694 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerName="extract-utilities" Feb 28 11:03:09 crc kubenswrapper[4996]: E0228 11:03:09.992716 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b24cca9-b985-4d62-bcf6-6f5327a53251" containerName="oc" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.992725 4996 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3b24cca9-b985-4d62-bcf6-6f5327a53251" containerName="oc" Feb 28 11:03:09 crc kubenswrapper[4996]: E0228 11:03:09.992747 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerName="extract-content" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.992756 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerName="extract-content" Feb 28 11:03:09 crc kubenswrapper[4996]: E0228 11:03:09.992776 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915895e5-31ba-450f-b3e8-a385e5937353" containerName="horizontest-tests-horizontest" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.992785 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="915895e5-31ba-450f-b3e8-a385e5937353" containerName="horizontest-tests-horizontest" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.993029 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="915895e5-31ba-450f-b3e8-a385e5937353" containerName="horizontest-tests-horizontest" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.993060 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b24cca9-b985-4d62-bcf6-6f5327a53251" containerName="oc" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.993084 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="02132e30-9c7f-478c-9259-10c18aa6b8d2" containerName="registry-server" Feb 28 11:03:09 crc kubenswrapper[4996]: I0228 11:03:09.993836 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.012411 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.112082 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6r5p\" (UniqueName: \"kubernetes.io/projected/c025b832-dde4-4bf3-ada7-0a882c92dd0b-kube-api-access-p6r5p\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"c025b832-dde4-4bf3-ada7-0a882c92dd0b\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.112195 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"c025b832-dde4-4bf3-ada7-0a882c92dd0b\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.214589 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6r5p\" (UniqueName: \"kubernetes.io/projected/c025b832-dde4-4bf3-ada7-0a882c92dd0b-kube-api-access-p6r5p\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"c025b832-dde4-4bf3-ada7-0a882c92dd0b\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.214859 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"c025b832-dde4-4bf3-ada7-0a882c92dd0b\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.215397 4996 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"c025b832-dde4-4bf3-ada7-0a882c92dd0b\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.242652 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6r5p\" (UniqueName: \"kubernetes.io/projected/c025b832-dde4-4bf3-ada7-0a882c92dd0b-kube-api-access-p6r5p\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"c025b832-dde4-4bf3-ada7-0a882c92dd0b\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.266835 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-horizontest-horizontest-tests-horizontest\" (UID: \"c025b832-dde4-4bf3-ada7-0a882c92dd0b\") " pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.316921 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" Feb 28 11:03:10 crc kubenswrapper[4996]: E0228 11:03:10.317174 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:03:10 crc kubenswrapper[4996]: I0228 11:03:10.830953 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest"] Feb 28 11:03:10 crc kubenswrapper[4996]: E0228 11:03:10.835736 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:03:11 crc kubenswrapper[4996]: I0228 11:03:11.151696 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"c025b832-dde4-4bf3-ada7-0a882c92dd0b","Type":"ContainerStarted","Data":"f9385522c102f335d1a4d5f5bf74d9c1f592d411ed55212269e7555f4f019bc8"} Feb 28 11:03:11 crc kubenswrapper[4996]: E0228 11:03:11.899110 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:03:12 crc kubenswrapper[4996]: I0228 11:03:12.159830 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" event={"ID":"c025b832-dde4-4bf3-ada7-0a882c92dd0b","Type":"ContainerStarted","Data":"f777de4eaee959683f6ba42a2932ee654e834c81f3d4cfe443e82faa08b3b87f"} Feb 28 11:03:12 crc kubenswrapper[4996]: E0228 
11:03:12.160653 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:03:12 crc kubenswrapper[4996]: I0228 11:03:12.175072 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-horizontest-horizontest-tests-horizontest" podStartSLOduration=2.113452541 podStartE2EDuration="3.175051294s" podCreationTimestamp="2026-02-28 11:03:09 +0000 UTC" firstStartedPulling="2026-02-28 11:03:10.837442157 +0000 UTC m=+7354.528244968" lastFinishedPulling="2026-02-28 11:03:11.89904089 +0000 UTC m=+7355.589843721" observedRunningTime="2026-02-28 11:03:12.172995834 +0000 UTC m=+7355.863798635" watchObservedRunningTime="2026-02-28 11:03:12.175051294 +0000 UTC m=+7355.865854115" Feb 28 11:03:13 crc kubenswrapper[4996]: E0228 11:03:13.170185 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:03:22 crc kubenswrapper[4996]: I0228 11:03:22.033611 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:03:22 crc kubenswrapper[4996]: E0228 11:03:22.034722 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.033113 4996 scope.go:117] 
"RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:03:35 crc kubenswrapper[4996]: E0228 11:03:35.034334 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.567398 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9txfd/must-gather-9fssl"] Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.569042 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.570426 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-9txfd"/"default-dockercfg-9966j" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.571751 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9txfd"/"openshift-service-ca.crt" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.572095 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-9txfd"/"kube-root-ca.crt" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.590670 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9txfd/must-gather-9fssl"] Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.648543 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15c2b360-7bd2-47c0-80ab-15cca738eb8c-must-gather-output\") pod \"must-gather-9fssl\" (UID: 
\"15c2b360-7bd2-47c0-80ab-15cca738eb8c\") " pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.648649 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkbq9\" (UniqueName: \"kubernetes.io/projected/15c2b360-7bd2-47c0-80ab-15cca738eb8c-kube-api-access-tkbq9\") pod \"must-gather-9fssl\" (UID: \"15c2b360-7bd2-47c0-80ab-15cca738eb8c\") " pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.750767 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15c2b360-7bd2-47c0-80ab-15cca738eb8c-must-gather-output\") pod \"must-gather-9fssl\" (UID: \"15c2b360-7bd2-47c0-80ab-15cca738eb8c\") " pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.750865 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkbq9\" (UniqueName: \"kubernetes.io/projected/15c2b360-7bd2-47c0-80ab-15cca738eb8c-kube-api-access-tkbq9\") pod \"must-gather-9fssl\" (UID: \"15c2b360-7bd2-47c0-80ab-15cca738eb8c\") " pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.751441 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15c2b360-7bd2-47c0-80ab-15cca738eb8c-must-gather-output\") pod \"must-gather-9fssl\" (UID: \"15c2b360-7bd2-47c0-80ab-15cca738eb8c\") " pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.780445 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkbq9\" (UniqueName: \"kubernetes.io/projected/15c2b360-7bd2-47c0-80ab-15cca738eb8c-kube-api-access-tkbq9\") pod \"must-gather-9fssl\" (UID: 
\"15c2b360-7bd2-47c0-80ab-15cca738eb8c\") " pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:03:35 crc kubenswrapper[4996]: I0228 11:03:35.890621 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:03:36 crc kubenswrapper[4996]: I0228 11:03:36.362107 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-9txfd/must-gather-9fssl"] Feb 28 11:03:36 crc kubenswrapper[4996]: I0228 11:03:36.399230 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/must-gather-9fssl" event={"ID":"15c2b360-7bd2-47c0-80ab-15cca738eb8c","Type":"ContainerStarted","Data":"9b3bd39815897247e673c36e24516dde39433befe43c6189d14281d982ae752d"} Feb 28 11:03:43 crc kubenswrapper[4996]: I0228 11:03:43.492684 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/must-gather-9fssl" event={"ID":"15c2b360-7bd2-47c0-80ab-15cca738eb8c","Type":"ContainerStarted","Data":"50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d"} Feb 28 11:03:43 crc kubenswrapper[4996]: I0228 11:03:43.493450 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/must-gather-9fssl" event={"ID":"15c2b360-7bd2-47c0-80ab-15cca738eb8c","Type":"ContainerStarted","Data":"ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d"} Feb 28 11:03:43 crc kubenswrapper[4996]: I0228 11:03:43.521967 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9txfd/must-gather-9fssl" podStartSLOduration=2.621399769 podStartE2EDuration="8.521948856s" podCreationTimestamp="2026-02-28 11:03:35 +0000 UTC" firstStartedPulling="2026-02-28 11:03:36.369166684 +0000 UTC m=+7380.059969485" lastFinishedPulling="2026-02-28 11:03:42.269715761 +0000 UTC m=+7385.960518572" observedRunningTime="2026-02-28 11:03:43.51886456 +0000 UTC m=+7387.209667371" 
watchObservedRunningTime="2026-02-28 11:03:43.521948856 +0000 UTC m=+7387.212751667" Feb 28 11:03:47 crc kubenswrapper[4996]: E0228 11:03:47.273102 4996 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.9:52060->38.102.83.9:39449: read tcp 38.102.83.9:52060->38.102.83.9:39449: read: connection reset by peer Feb 28 11:03:47 crc kubenswrapper[4996]: E0228 11:03:47.273120 4996 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.9:52060->38.102.83.9:39449: write tcp 38.102.83.9:52060->38.102.83.9:39449: write: broken pipe Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.050855 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9txfd/crc-debug-fp7pd"] Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.052244 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.119725 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbdr6\" (UniqueName: \"kubernetes.io/projected/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-kube-api-access-wbdr6\") pod \"crc-debug-fp7pd\" (UID: \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\") " pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.119897 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-host\") pod \"crc-debug-fp7pd\" (UID: \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\") " pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.221879 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbdr6\" (UniqueName: 
\"kubernetes.io/projected/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-kube-api-access-wbdr6\") pod \"crc-debug-fp7pd\" (UID: \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\") " pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.222325 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-host\") pod \"crc-debug-fp7pd\" (UID: \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\") " pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.222427 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-host\") pod \"crc-debug-fp7pd\" (UID: \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\") " pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.248198 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbdr6\" (UniqueName: \"kubernetes.io/projected/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-kube-api-access-wbdr6\") pod \"crc-debug-fp7pd\" (UID: \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\") " pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.370584 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:03:48 crc kubenswrapper[4996]: I0228 11:03:48.562883 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/crc-debug-fp7pd" event={"ID":"4f0b663d-2f62-4bcd-8c6c-d67cc6016496","Type":"ContainerStarted","Data":"fea0396816faadb656fe289b1f2cbd0b40119c8463165237ce81b9882b7c2ff7"} Feb 28 11:03:49 crc kubenswrapper[4996]: I0228 11:03:49.034207 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:03:49 crc kubenswrapper[4996]: E0228 11:03:49.034417 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:03:59 crc kubenswrapper[4996]: I0228 11:03:59.685633 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/crc-debug-fp7pd" event={"ID":"4f0b663d-2f62-4bcd-8c6c-d67cc6016496","Type":"ContainerStarted","Data":"e26051b154981919154b3e6e7aa33ed23fccf55f40f8622bb64749b3c3c596e6"} Feb 28 11:03:59 crc kubenswrapper[4996]: I0228 11:03:59.709953 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9txfd/crc-debug-fp7pd" podStartSLOduration=0.756507974 podStartE2EDuration="11.709930176s" podCreationTimestamp="2026-02-28 11:03:48 +0000 UTC" firstStartedPulling="2026-02-28 11:03:48.414615196 +0000 UTC m=+7392.105418007" lastFinishedPulling="2026-02-28 11:03:59.368037398 +0000 UTC m=+7403.058840209" observedRunningTime="2026-02-28 11:03:59.703888058 +0000 UTC m=+7403.394690889" watchObservedRunningTime="2026-02-28 11:03:59.709930176 +0000 UTC 
m=+7403.400733017" Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.138783 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537944-scnxv"] Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.140709 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537944-scnxv" Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.144262 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.144368 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.144410 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.163624 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537944-scnxv"] Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.188335 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rzjm\" (UniqueName: \"kubernetes.io/projected/2067f086-a6e3-42f6-9207-48868646e826-kube-api-access-7rzjm\") pod \"auto-csr-approver-29537944-scnxv\" (UID: \"2067f086-a6e3-42f6-9207-48868646e826\") " pod="openshift-infra/auto-csr-approver-29537944-scnxv" Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.291726 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rzjm\" (UniqueName: \"kubernetes.io/projected/2067f086-a6e3-42f6-9207-48868646e826-kube-api-access-7rzjm\") pod \"auto-csr-approver-29537944-scnxv\" (UID: \"2067f086-a6e3-42f6-9207-48868646e826\") " pod="openshift-infra/auto-csr-approver-29537944-scnxv" Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 
11:04:00.313539 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rzjm\" (UniqueName: \"kubernetes.io/projected/2067f086-a6e3-42f6-9207-48868646e826-kube-api-access-7rzjm\") pod \"auto-csr-approver-29537944-scnxv\" (UID: \"2067f086-a6e3-42f6-9207-48868646e826\") " pod="openshift-infra/auto-csr-approver-29537944-scnxv" Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.497752 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537944-scnxv" Feb 28 11:04:00 crc kubenswrapper[4996]: I0228 11:04:00.963198 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537944-scnxv"] Feb 28 11:04:00 crc kubenswrapper[4996]: W0228 11:04:00.964590 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2067f086_a6e3_42f6_9207_48868646e826.slice/crio-d713ed4c2ed68aebd1ac63648fdb40f00bb89d5ff56c115db59343845e93c35e WatchSource:0}: Error finding container d713ed4c2ed68aebd1ac63648fdb40f00bb89d5ff56c115db59343845e93c35e: Status 404 returned error can't find the container with id d713ed4c2ed68aebd1ac63648fdb40f00bb89d5ff56c115db59343845e93c35e Feb 28 11:04:01 crc kubenswrapper[4996]: I0228 11:04:01.034033 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:04:01 crc kubenswrapper[4996]: E0228 11:04:01.034335 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:04:01 crc kubenswrapper[4996]: I0228 
11:04:01.706087 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537944-scnxv" event={"ID":"2067f086-a6e3-42f6-9207-48868646e826","Type":"ContainerStarted","Data":"d713ed4c2ed68aebd1ac63648fdb40f00bb89d5ff56c115db59343845e93c35e"} Feb 28 11:04:02 crc kubenswrapper[4996]: I0228 11:04:02.716188 4996 generic.go:334] "Generic (PLEG): container finished" podID="2067f086-a6e3-42f6-9207-48868646e826" containerID="408f20dfd7b990c67ccee2c2dd26b710f25f8192eec4efebba66306be9e93c02" exitCode=0 Feb 28 11:04:02 crc kubenswrapper[4996]: I0228 11:04:02.716293 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537944-scnxv" event={"ID":"2067f086-a6e3-42f6-9207-48868646e826","Type":"ContainerDied","Data":"408f20dfd7b990c67ccee2c2dd26b710f25f8192eec4efebba66306be9e93c02"} Feb 28 11:04:04 crc kubenswrapper[4996]: I0228 11:04:04.111353 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537944-scnxv" Feb 28 11:04:04 crc kubenswrapper[4996]: I0228 11:04:04.170860 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rzjm\" (UniqueName: \"kubernetes.io/projected/2067f086-a6e3-42f6-9207-48868646e826-kube-api-access-7rzjm\") pod \"2067f086-a6e3-42f6-9207-48868646e826\" (UID: \"2067f086-a6e3-42f6-9207-48868646e826\") " Feb 28 11:04:04 crc kubenswrapper[4996]: I0228 11:04:04.177936 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2067f086-a6e3-42f6-9207-48868646e826-kube-api-access-7rzjm" (OuterVolumeSpecName: "kube-api-access-7rzjm") pod "2067f086-a6e3-42f6-9207-48868646e826" (UID: "2067f086-a6e3-42f6-9207-48868646e826"). InnerVolumeSpecName "kube-api-access-7rzjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:04:04 crc kubenswrapper[4996]: I0228 11:04:04.273101 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rzjm\" (UniqueName: \"kubernetes.io/projected/2067f086-a6e3-42f6-9207-48868646e826-kube-api-access-7rzjm\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:04 crc kubenswrapper[4996]: I0228 11:04:04.736664 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537944-scnxv" event={"ID":"2067f086-a6e3-42f6-9207-48868646e826","Type":"ContainerDied","Data":"d713ed4c2ed68aebd1ac63648fdb40f00bb89d5ff56c115db59343845e93c35e"} Feb 28 11:04:04 crc kubenswrapper[4996]: I0228 11:04:04.736708 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d713ed4c2ed68aebd1ac63648fdb40f00bb89d5ff56c115db59343845e93c35e" Feb 28 11:04:04 crc kubenswrapper[4996]: I0228 11:04:04.736714 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537944-scnxv" Feb 28 11:04:05 crc kubenswrapper[4996]: I0228 11:04:05.182209 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537938-8jb8v"] Feb 28 11:04:05 crc kubenswrapper[4996]: I0228 11:04:05.195974 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537938-8jb8v"] Feb 28 11:04:07 crc kubenswrapper[4996]: I0228 11:04:07.048947 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbc34363-0648-4598-b3e5-eb546f7d5bf2" path="/var/lib/kubelet/pods/cbc34363-0648-4598-b3e5-eb546f7d5bf2/volumes" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.034074 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:04:14 crc kubenswrapper[4996]: E0228 11:04:14.034874 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.297931 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qzdfv"] Feb 28 11:04:14 crc kubenswrapper[4996]: E0228 11:04:14.298432 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2067f086-a6e3-42f6-9207-48868646e826" containerName="oc" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.298458 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="2067f086-a6e3-42f6-9207-48868646e826" containerName="oc" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.298635 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="2067f086-a6e3-42f6-9207-48868646e826" containerName="oc" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.300151 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.341618 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzdfv"] Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.463415 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-catalog-content\") pod \"redhat-marketplace-qzdfv\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.463556 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzg54\" (UniqueName: \"kubernetes.io/projected/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-kube-api-access-bzg54\") pod \"redhat-marketplace-qzdfv\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.463608 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-utilities\") pod \"redhat-marketplace-qzdfv\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.564990 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-catalog-content\") pod \"redhat-marketplace-qzdfv\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.565132 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bzg54\" (UniqueName: \"kubernetes.io/projected/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-kube-api-access-bzg54\") pod \"redhat-marketplace-qzdfv\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.565194 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-utilities\") pod \"redhat-marketplace-qzdfv\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.565567 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-catalog-content\") pod \"redhat-marketplace-qzdfv\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.565599 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-utilities\") pod \"redhat-marketplace-qzdfv\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.591391 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzg54\" (UniqueName: \"kubernetes.io/projected/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-kube-api-access-bzg54\") pod \"redhat-marketplace-qzdfv\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:14 crc kubenswrapper[4996]: I0228 11:04:14.616875 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:15 crc kubenswrapper[4996]: I0228 11:04:15.204923 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzdfv"] Feb 28 11:04:15 crc kubenswrapper[4996]: W0228 11:04:15.234898 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod770df5be_4fc9_45d9_ab9c_de4af2a47fbc.slice/crio-3c1b4ff7890fccd8a464cbde5f40e2abf12a86a5767ca878a4a5dd66d17e4979 WatchSource:0}: Error finding container 3c1b4ff7890fccd8a464cbde5f40e2abf12a86a5767ca878a4a5dd66d17e4979: Status 404 returned error can't find the container with id 3c1b4ff7890fccd8a464cbde5f40e2abf12a86a5767ca878a4a5dd66d17e4979 Feb 28 11:04:15 crc kubenswrapper[4996]: I0228 11:04:15.839192 4996 generic.go:334] "Generic (PLEG): container finished" podID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerID="54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371" exitCode=0 Feb 28 11:04:15 crc kubenswrapper[4996]: I0228 11:04:15.839243 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzdfv" event={"ID":"770df5be-4fc9-45d9-ab9c-de4af2a47fbc","Type":"ContainerDied","Data":"54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371"} Feb 28 11:04:15 crc kubenswrapper[4996]: I0228 11:04:15.839678 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzdfv" event={"ID":"770df5be-4fc9-45d9-ab9c-de4af2a47fbc","Type":"ContainerStarted","Data":"3c1b4ff7890fccd8a464cbde5f40e2abf12a86a5767ca878a4a5dd66d17e4979"} Feb 28 11:04:16 crc kubenswrapper[4996]: I0228 11:04:16.850308 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzdfv" 
event={"ID":"770df5be-4fc9-45d9-ab9c-de4af2a47fbc","Type":"ContainerStarted","Data":"cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641"} Feb 28 11:04:17 crc kubenswrapper[4996]: I0228 11:04:17.859778 4996 generic.go:334] "Generic (PLEG): container finished" podID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerID="cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641" exitCode=0 Feb 28 11:04:17 crc kubenswrapper[4996]: I0228 11:04:17.859837 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzdfv" event={"ID":"770df5be-4fc9-45d9-ab9c-de4af2a47fbc","Type":"ContainerDied","Data":"cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641"} Feb 28 11:04:17 crc kubenswrapper[4996]: I0228 11:04:17.862292 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 11:04:18 crc kubenswrapper[4996]: I0228 11:04:18.874370 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzdfv" event={"ID":"770df5be-4fc9-45d9-ab9c-de4af2a47fbc","Type":"ContainerStarted","Data":"3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d"} Feb 28 11:04:18 crc kubenswrapper[4996]: I0228 11:04:18.893353 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qzdfv" podStartSLOduration=2.47178986 podStartE2EDuration="4.893336907s" podCreationTimestamp="2026-02-28 11:04:14 +0000 UTC" firstStartedPulling="2026-02-28 11:04:15.841857634 +0000 UTC m=+7419.532660445" lastFinishedPulling="2026-02-28 11:04:18.263404681 +0000 UTC m=+7421.954207492" observedRunningTime="2026-02-28 11:04:18.889209306 +0000 UTC m=+7422.580012117" watchObservedRunningTime="2026-02-28 11:04:18.893336907 +0000 UTC m=+7422.584139718" Feb 28 11:04:24 crc kubenswrapper[4996]: I0228 11:04:24.617188 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:24 crc kubenswrapper[4996]: I0228 11:04:24.617779 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:24 crc kubenswrapper[4996]: I0228 11:04:24.676420 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:24 crc kubenswrapper[4996]: I0228 11:04:24.979394 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:25 crc kubenswrapper[4996]: I0228 11:04:25.058050 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzdfv"] Feb 28 11:04:26 crc kubenswrapper[4996]: I0228 11:04:26.942867 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qzdfv" podUID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerName="registry-server" containerID="cri-o://3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d" gracePeriod=2 Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.429891 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.612776 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-utilities\") pod \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.612853 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-catalog-content\") pod \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.612904 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzg54\" (UniqueName: \"kubernetes.io/projected/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-kube-api-access-bzg54\") pod \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\" (UID: \"770df5be-4fc9-45d9-ab9c-de4af2a47fbc\") " Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.615026 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-utilities" (OuterVolumeSpecName: "utilities") pod "770df5be-4fc9-45d9-ab9c-de4af2a47fbc" (UID: "770df5be-4fc9-45d9-ab9c-de4af2a47fbc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.632854 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-kube-api-access-bzg54" (OuterVolumeSpecName: "kube-api-access-bzg54") pod "770df5be-4fc9-45d9-ab9c-de4af2a47fbc" (UID: "770df5be-4fc9-45d9-ab9c-de4af2a47fbc"). InnerVolumeSpecName "kube-api-access-bzg54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.640928 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "770df5be-4fc9-45d9-ab9c-de4af2a47fbc" (UID: "770df5be-4fc9-45d9-ab9c-de4af2a47fbc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.715782 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.716015 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.716108 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzg54\" (UniqueName: \"kubernetes.io/projected/770df5be-4fc9-45d9-ab9c-de4af2a47fbc-kube-api-access-bzg54\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.952617 4996 generic.go:334] "Generic (PLEG): container finished" podID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerID="3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d" exitCode=0 Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.952663 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzdfv" event={"ID":"770df5be-4fc9-45d9-ab9c-de4af2a47fbc","Type":"ContainerDied","Data":"3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d"} Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.952676 4996 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzdfv" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.952690 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzdfv" event={"ID":"770df5be-4fc9-45d9-ab9c-de4af2a47fbc","Type":"ContainerDied","Data":"3c1b4ff7890fccd8a464cbde5f40e2abf12a86a5767ca878a4a5dd66d17e4979"} Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.952708 4996 scope.go:117] "RemoveContainer" containerID="3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.975158 4996 scope.go:117] "RemoveContainer" containerID="cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641" Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.989525 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzdfv"] Feb 28 11:04:27 crc kubenswrapper[4996]: I0228 11:04:27.998711 4996 scope.go:117] "RemoveContainer" containerID="54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371" Feb 28 11:04:28 crc kubenswrapper[4996]: I0228 11:04:28.001840 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzdfv"] Feb 28 11:04:28 crc kubenswrapper[4996]: I0228 11:04:28.033062 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:04:28 crc kubenswrapper[4996]: E0228 11:04:28.033421 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:04:28 crc kubenswrapper[4996]: 
I0228 11:04:28.048570 4996 scope.go:117] "RemoveContainer" containerID="3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d" Feb 28 11:04:28 crc kubenswrapper[4996]: E0228 11:04:28.051202 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d\": container with ID starting with 3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d not found: ID does not exist" containerID="3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d" Feb 28 11:04:28 crc kubenswrapper[4996]: I0228 11:04:28.051329 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d"} err="failed to get container status \"3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d\": rpc error: code = NotFound desc = could not find container \"3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d\": container with ID starting with 3e38f6552dd1e27b83af0761aaeb78a88d41c28f6e90ae8c517677e05d99292d not found: ID does not exist" Feb 28 11:04:28 crc kubenswrapper[4996]: I0228 11:04:28.051361 4996 scope.go:117] "RemoveContainer" containerID="cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641" Feb 28 11:04:28 crc kubenswrapper[4996]: E0228 11:04:28.051714 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641\": container with ID starting with cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641 not found: ID does not exist" containerID="cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641" Feb 28 11:04:28 crc kubenswrapper[4996]: I0228 11:04:28.051769 4996 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641"} err="failed to get container status \"cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641\": rpc error: code = NotFound desc = could not find container \"cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641\": container with ID starting with cd4ff5c679511acdbd03b5fbe8238ffdc56268643c5b060ea7218cefed65a641 not found: ID does not exist" Feb 28 11:04:28 crc kubenswrapper[4996]: I0228 11:04:28.051788 4996 scope.go:117] "RemoveContainer" containerID="54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371" Feb 28 11:04:28 crc kubenswrapper[4996]: E0228 11:04:28.052068 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371\": container with ID starting with 54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371 not found: ID does not exist" containerID="54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371" Feb 28 11:04:28 crc kubenswrapper[4996]: I0228 11:04:28.052121 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371"} err="failed to get container status \"54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371\": rpc error: code = NotFound desc = could not find container \"54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371\": container with ID starting with 54d08286c3195a22a0fb2d7368f7b24669d998cd14d874ff8fd314c39d3f6371 not found: ID does not exist" Feb 28 11:04:29 crc kubenswrapper[4996]: I0228 11:04:29.054707 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" path="/var/lib/kubelet/pods/770df5be-4fc9-45d9-ab9c-de4af2a47fbc/volumes" Feb 28 11:04:38 crc kubenswrapper[4996]: E0228 
11:04:38.033244 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:04:41 crc kubenswrapper[4996]: I0228 11:04:41.037263 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:04:41 crc kubenswrapper[4996]: E0228 11:04:41.038245 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.547187 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ft9t4"] Feb 28 11:04:45 crc kubenswrapper[4996]: E0228 11:04:45.549486 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerName="registry-server" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.549596 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerName="registry-server" Feb 28 11:04:45 crc kubenswrapper[4996]: E0228 11:04:45.549703 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerName="extract-content" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.549780 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerName="extract-content" Feb 28 11:04:45 crc kubenswrapper[4996]: E0228 11:04:45.549881 4996 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerName="extract-utilities" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.549960 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerName="extract-utilities" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.550314 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="770df5be-4fc9-45d9-ab9c-de4af2a47fbc" containerName="registry-server" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.552224 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.576205 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vhrm\" (UniqueName: \"kubernetes.io/projected/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-kube-api-access-6vhrm\") pod \"redhat-operators-ft9t4\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.576341 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-utilities\") pod \"redhat-operators-ft9t4\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.576442 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-catalog-content\") pod \"redhat-operators-ft9t4\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.591042 
4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ft9t4"] Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.679287 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vhrm\" (UniqueName: \"kubernetes.io/projected/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-kube-api-access-6vhrm\") pod \"redhat-operators-ft9t4\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.679374 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-utilities\") pod \"redhat-operators-ft9t4\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.679433 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-catalog-content\") pod \"redhat-operators-ft9t4\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.679923 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-utilities\") pod \"redhat-operators-ft9t4\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.680072 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-catalog-content\") pod \"redhat-operators-ft9t4\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " 
pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.704707 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vhrm\" (UniqueName: \"kubernetes.io/projected/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-kube-api-access-6vhrm\") pod \"redhat-operators-ft9t4\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:45 crc kubenswrapper[4996]: I0228 11:04:45.904636 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:04:46 crc kubenswrapper[4996]: I0228 11:04:46.411447 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ft9t4"] Feb 28 11:04:46 crc kubenswrapper[4996]: E0228 11:04:46.884709 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd24c5d9d_c5fe_4c75_b8e9_c887873829d5.slice/crio-conmon-728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd24c5d9d_c5fe_4c75_b8e9_c887873829d5.slice/crio-728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f0b663d_2f62_4bcd_8c6c_d67cc6016496.slice/crio-conmon-e26051b154981919154b3e6e7aa33ed23fccf55f40f8622bb64749b3c3c596e6.scope\": RecentStats: unable to find data in memory cache]" Feb 28 11:04:47 crc kubenswrapper[4996]: I0228 11:04:47.151397 4996 generic.go:334] "Generic (PLEG): container finished" podID="4f0b663d-2f62-4bcd-8c6c-d67cc6016496" containerID="e26051b154981919154b3e6e7aa33ed23fccf55f40f8622bb64749b3c3c596e6" exitCode=0 Feb 28 11:04:47 crc 
kubenswrapper[4996]: I0228 11:04:47.151493 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/crc-debug-fp7pd" event={"ID":"4f0b663d-2f62-4bcd-8c6c-d67cc6016496","Type":"ContainerDied","Data":"e26051b154981919154b3e6e7aa33ed23fccf55f40f8622bb64749b3c3c596e6"} Feb 28 11:04:47 crc kubenswrapper[4996]: I0228 11:04:47.153446 4996 generic.go:334] "Generic (PLEG): container finished" podID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerID="728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b" exitCode=0 Feb 28 11:04:47 crc kubenswrapper[4996]: I0228 11:04:47.153502 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft9t4" event={"ID":"d24c5d9d-c5fe-4c75-b8e9-c887873829d5","Type":"ContainerDied","Data":"728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b"} Feb 28 11:04:47 crc kubenswrapper[4996]: I0228 11:04:47.153529 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft9t4" event={"ID":"d24c5d9d-c5fe-4c75-b8e9-c887873829d5","Type":"ContainerStarted","Data":"2415039b7d44c65a55bb16e7e9b6353867bc67fc945895130af6babec29eb2e1"} Feb 28 11:04:48 crc kubenswrapper[4996]: I0228 11:04:48.317628 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:04:48 crc kubenswrapper[4996]: I0228 11:04:48.329606 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-host\") pod \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\" (UID: \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\") " Feb 28 11:04:48 crc kubenswrapper[4996]: I0228 11:04:48.329702 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-host" (OuterVolumeSpecName: "host") pod "4f0b663d-2f62-4bcd-8c6c-d67cc6016496" (UID: "4f0b663d-2f62-4bcd-8c6c-d67cc6016496"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 11:04:48 crc kubenswrapper[4996]: I0228 11:04:48.329726 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbdr6\" (UniqueName: \"kubernetes.io/projected/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-kube-api-access-wbdr6\") pod \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\" (UID: \"4f0b663d-2f62-4bcd-8c6c-d67cc6016496\") " Feb 28 11:04:48 crc kubenswrapper[4996]: I0228 11:04:48.330213 4996 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-host\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:48 crc kubenswrapper[4996]: I0228 11:04:48.343540 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-kube-api-access-wbdr6" (OuterVolumeSpecName: "kube-api-access-wbdr6") pod "4f0b663d-2f62-4bcd-8c6c-d67cc6016496" (UID: "4f0b663d-2f62-4bcd-8c6c-d67cc6016496"). InnerVolumeSpecName "kube-api-access-wbdr6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:04:48 crc kubenswrapper[4996]: I0228 11:04:48.360093 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9txfd/crc-debug-fp7pd"] Feb 28 11:04:48 crc kubenswrapper[4996]: I0228 11:04:48.368984 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9txfd/crc-debug-fp7pd"] Feb 28 11:04:48 crc kubenswrapper[4996]: I0228 11:04:48.432545 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbdr6\" (UniqueName: \"kubernetes.io/projected/4f0b663d-2f62-4bcd-8c6c-d67cc6016496-kube-api-access-wbdr6\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.055967 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f0b663d-2f62-4bcd-8c6c-d67cc6016496" path="/var/lib/kubelet/pods/4f0b663d-2f62-4bcd-8c6c-d67cc6016496/volumes" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.174455 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft9t4" event={"ID":"d24c5d9d-c5fe-4c75-b8e9-c887873829d5","Type":"ContainerStarted","Data":"2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3"} Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.176435 4996 scope.go:117] "RemoveContainer" containerID="e26051b154981919154b3e6e7aa33ed23fccf55f40f8622bb64749b3c3c596e6" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.176531 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-fp7pd" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.516798 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9txfd/crc-debug-9d2xt"] Feb 28 11:04:49 crc kubenswrapper[4996]: E0228 11:04:49.517191 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f0b663d-2f62-4bcd-8c6c-d67cc6016496" containerName="container-00" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.517205 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f0b663d-2f62-4bcd-8c6c-d67cc6016496" containerName="container-00" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.517448 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f0b663d-2f62-4bcd-8c6c-d67cc6016496" containerName="container-00" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.518068 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.555125 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e3305e4-b632-4c5c-85b4-c8e519894eb0-host\") pod \"crc-debug-9d2xt\" (UID: \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\") " pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.555217 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44t8t\" (UniqueName: \"kubernetes.io/projected/1e3305e4-b632-4c5c-85b4-c8e519894eb0-kube-api-access-44t8t\") pod \"crc-debug-9d2xt\" (UID: \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\") " pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.657642 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44t8t\" (UniqueName: 
\"kubernetes.io/projected/1e3305e4-b632-4c5c-85b4-c8e519894eb0-kube-api-access-44t8t\") pod \"crc-debug-9d2xt\" (UID: \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\") " pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.657876 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e3305e4-b632-4c5c-85b4-c8e519894eb0-host\") pod \"crc-debug-9d2xt\" (UID: \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\") " pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.657933 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e3305e4-b632-4c5c-85b4-c8e519894eb0-host\") pod \"crc-debug-9d2xt\" (UID: \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\") " pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.680993 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44t8t\" (UniqueName: \"kubernetes.io/projected/1e3305e4-b632-4c5c-85b4-c8e519894eb0-kube-api-access-44t8t\") pod \"crc-debug-9d2xt\" (UID: \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\") " pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:49 crc kubenswrapper[4996]: I0228 11:04:49.835539 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:49 crc kubenswrapper[4996]: W0228 11:04:49.861676 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e3305e4_b632_4c5c_85b4_c8e519894eb0.slice/crio-3961bdf60653583a153907f10efb8c72021fb1d38d1685b177e452f8424d36a8 WatchSource:0}: Error finding container 3961bdf60653583a153907f10efb8c72021fb1d38d1685b177e452f8424d36a8: Status 404 returned error can't find the container with id 3961bdf60653583a153907f10efb8c72021fb1d38d1685b177e452f8424d36a8 Feb 28 11:04:50 crc kubenswrapper[4996]: I0228 11:04:50.187126 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/crc-debug-9d2xt" event={"ID":"1e3305e4-b632-4c5c-85b4-c8e519894eb0","Type":"ContainerStarted","Data":"da26fc985dda4d3e201ceb34ce0c0f3a701bd5e0a6e32a927cfd4a620053c167"} Feb 28 11:04:50 crc kubenswrapper[4996]: I0228 11:04:50.187457 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/crc-debug-9d2xt" event={"ID":"1e3305e4-b632-4c5c-85b4-c8e519894eb0","Type":"ContainerStarted","Data":"3961bdf60653583a153907f10efb8c72021fb1d38d1685b177e452f8424d36a8"} Feb 28 11:04:50 crc kubenswrapper[4996]: I0228 11:04:50.206984 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-9txfd/crc-debug-9d2xt" podStartSLOduration=1.206967902 podStartE2EDuration="1.206967902s" podCreationTimestamp="2026-02-28 11:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 11:04:50.204196894 +0000 UTC m=+7453.894999735" watchObservedRunningTime="2026-02-28 11:04:50.206967902 +0000 UTC m=+7453.897770713" Feb 28 11:04:51 crc kubenswrapper[4996]: I0228 11:04:51.187679 4996 scope.go:117] "RemoveContainer" 
containerID="3b5932a235a3cb5a50566b06215a0ab9c9b1502dfb526fc41b3b6c04f45a0d66" Feb 28 11:04:51 crc kubenswrapper[4996]: I0228 11:04:51.199517 4996 generic.go:334] "Generic (PLEG): container finished" podID="1e3305e4-b632-4c5c-85b4-c8e519894eb0" containerID="da26fc985dda4d3e201ceb34ce0c0f3a701bd5e0a6e32a927cfd4a620053c167" exitCode=0 Feb 28 11:04:51 crc kubenswrapper[4996]: I0228 11:04:51.199924 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/crc-debug-9d2xt" event={"ID":"1e3305e4-b632-4c5c-85b4-c8e519894eb0","Type":"ContainerDied","Data":"da26fc985dda4d3e201ceb34ce0c0f3a701bd5e0a6e32a927cfd4a620053c167"} Feb 28 11:04:52 crc kubenswrapper[4996]: I0228 11:04:52.305293 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:52 crc kubenswrapper[4996]: I0228 11:04:52.407692 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44t8t\" (UniqueName: \"kubernetes.io/projected/1e3305e4-b632-4c5c-85b4-c8e519894eb0-kube-api-access-44t8t\") pod \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\" (UID: \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\") " Feb 28 11:04:52 crc kubenswrapper[4996]: I0228 11:04:52.407879 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e3305e4-b632-4c5c-85b4-c8e519894eb0-host\") pod \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\" (UID: \"1e3305e4-b632-4c5c-85b4-c8e519894eb0\") " Feb 28 11:04:52 crc kubenswrapper[4996]: I0228 11:04:52.407924 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3305e4-b632-4c5c-85b4-c8e519894eb0-host" (OuterVolumeSpecName: "host") pod "1e3305e4-b632-4c5c-85b4-c8e519894eb0" (UID: "1e3305e4-b632-4c5c-85b4-c8e519894eb0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 11:04:52 crc kubenswrapper[4996]: I0228 11:04:52.408289 4996 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1e3305e4-b632-4c5c-85b4-c8e519894eb0-host\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:52 crc kubenswrapper[4996]: I0228 11:04:52.438089 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3305e4-b632-4c5c-85b4-c8e519894eb0-kube-api-access-44t8t" (OuterVolumeSpecName: "kube-api-access-44t8t") pod "1e3305e4-b632-4c5c-85b4-c8e519894eb0" (UID: "1e3305e4-b632-4c5c-85b4-c8e519894eb0"). InnerVolumeSpecName "kube-api-access-44t8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:04:52 crc kubenswrapper[4996]: I0228 11:04:52.509647 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44t8t\" (UniqueName: \"kubernetes.io/projected/1e3305e4-b632-4c5c-85b4-c8e519894eb0-kube-api-access-44t8t\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:52 crc kubenswrapper[4996]: I0228 11:04:52.967363 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9txfd/crc-debug-9d2xt"] Feb 28 11:04:52 crc kubenswrapper[4996]: I0228 11:04:52.980434 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9txfd/crc-debug-9d2xt"] Feb 28 11:04:53 crc kubenswrapper[4996]: I0228 11:04:53.056570 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3305e4-b632-4c5c-85b4-c8e519894eb0" path="/var/lib/kubelet/pods/1e3305e4-b632-4c5c-85b4-c8e519894eb0/volumes" Feb 28 11:04:53 crc kubenswrapper[4996]: I0228 11:04:53.226108 4996 scope.go:117] "RemoveContainer" containerID="da26fc985dda4d3e201ceb34ce0c0f3a701bd5e0a6e32a927cfd4a620053c167" Feb 28 11:04:53 crc kubenswrapper[4996]: I0228 11:04:53.226177 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-9d2xt" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.175677 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-9txfd/crc-debug-kpx6s"] Feb 28 11:04:54 crc kubenswrapper[4996]: E0228 11:04:54.176412 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e3305e4-b632-4c5c-85b4-c8e519894eb0" containerName="container-00" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.176429 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e3305e4-b632-4c5c-85b4-c8e519894eb0" containerName="container-00" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.176647 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e3305e4-b632-4c5c-85b4-c8e519894eb0" containerName="container-00" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.177829 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.242643 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aff0f09-f19b-4da0-addf-02d3c09d04df-host\") pod \"crc-debug-kpx6s\" (UID: \"8aff0f09-f19b-4da0-addf-02d3c09d04df\") " pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.242835 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d95wt\" (UniqueName: \"kubernetes.io/projected/8aff0f09-f19b-4da0-addf-02d3c09d04df-kube-api-access-d95wt\") pod \"crc-debug-kpx6s\" (UID: \"8aff0f09-f19b-4da0-addf-02d3c09d04df\") " pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.344604 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d95wt\" (UniqueName: 
\"kubernetes.io/projected/8aff0f09-f19b-4da0-addf-02d3c09d04df-kube-api-access-d95wt\") pod \"crc-debug-kpx6s\" (UID: \"8aff0f09-f19b-4da0-addf-02d3c09d04df\") " pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.344753 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aff0f09-f19b-4da0-addf-02d3c09d04df-host\") pod \"crc-debug-kpx6s\" (UID: \"8aff0f09-f19b-4da0-addf-02d3c09d04df\") " pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.344910 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aff0f09-f19b-4da0-addf-02d3c09d04df-host\") pod \"crc-debug-kpx6s\" (UID: \"8aff0f09-f19b-4da0-addf-02d3c09d04df\") " pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.362879 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d95wt\" (UniqueName: \"kubernetes.io/projected/8aff0f09-f19b-4da0-addf-02d3c09d04df-kube-api-access-d95wt\") pod \"crc-debug-kpx6s\" (UID: \"8aff0f09-f19b-4da0-addf-02d3c09d04df\") " pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:04:54 crc kubenswrapper[4996]: I0228 11:04:54.494655 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:04:54 crc kubenswrapper[4996]: W0228 11:04:54.524042 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8aff0f09_f19b_4da0_addf_02d3c09d04df.slice/crio-a07c9f58adb9f8c14bb92a0558eb4c04ea2998959e3743a2d4f2e4a82b537845 WatchSource:0}: Error finding container a07c9f58adb9f8c14bb92a0558eb4c04ea2998959e3743a2d4f2e4a82b537845: Status 404 returned error can't find the container with id a07c9f58adb9f8c14bb92a0558eb4c04ea2998959e3743a2d4f2e4a82b537845 Feb 28 11:04:55 crc kubenswrapper[4996]: I0228 11:04:55.251672 4996 generic.go:334] "Generic (PLEG): container finished" podID="8aff0f09-f19b-4da0-addf-02d3c09d04df" containerID="e9bb046647a6fb07329df1bfd0f1915f86d7432ee06669933fbdfefc1fa778c2" exitCode=0 Feb 28 11:04:55 crc kubenswrapper[4996]: I0228 11:04:55.252033 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/crc-debug-kpx6s" event={"ID":"8aff0f09-f19b-4da0-addf-02d3c09d04df","Type":"ContainerDied","Data":"e9bb046647a6fb07329df1bfd0f1915f86d7432ee06669933fbdfefc1fa778c2"} Feb 28 11:04:55 crc kubenswrapper[4996]: I0228 11:04:55.252061 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/crc-debug-kpx6s" event={"ID":"8aff0f09-f19b-4da0-addf-02d3c09d04df","Type":"ContainerStarted","Data":"a07c9f58adb9f8c14bb92a0558eb4c04ea2998959e3743a2d4f2e4a82b537845"} Feb 28 11:04:55 crc kubenswrapper[4996]: I0228 11:04:55.254382 4996 generic.go:334] "Generic (PLEG): container finished" podID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerID="2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3" exitCode=0 Feb 28 11:04:55 crc kubenswrapper[4996]: I0228 11:04:55.254421 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft9t4" 
event={"ID":"d24c5d9d-c5fe-4c75-b8e9-c887873829d5","Type":"ContainerDied","Data":"2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3"} Feb 28 11:04:55 crc kubenswrapper[4996]: I0228 11:04:55.302356 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9txfd/crc-debug-kpx6s"] Feb 28 11:04:55 crc kubenswrapper[4996]: I0228 11:04:55.324164 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9txfd/crc-debug-kpx6s"] Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.033985 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:04:56 crc kubenswrapper[4996]: E0228 11:04:56.034601 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.264156 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft9t4" event={"ID":"d24c5d9d-c5fe-4c75-b8e9-c887873829d5","Type":"ContainerStarted","Data":"8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc"} Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.301543 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ft9t4" podStartSLOduration=2.816008495 podStartE2EDuration="11.301521934s" podCreationTimestamp="2026-02-28 11:04:45 +0000 UTC" firstStartedPulling="2026-02-28 11:04:47.155714574 +0000 UTC m=+7450.846517385" lastFinishedPulling="2026-02-28 11:04:55.641227983 +0000 UTC m=+7459.332030824" observedRunningTime="2026-02-28 11:04:56.291753256 
+0000 UTC m=+7459.982556067" watchObservedRunningTime="2026-02-28 11:04:56.301521934 +0000 UTC m=+7459.992324755" Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.378935 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.386245 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aff0f09-f19b-4da0-addf-02d3c09d04df-host\") pod \"8aff0f09-f19b-4da0-addf-02d3c09d04df\" (UID: \"8aff0f09-f19b-4da0-addf-02d3c09d04df\") " Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.386359 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d95wt\" (UniqueName: \"kubernetes.io/projected/8aff0f09-f19b-4da0-addf-02d3c09d04df-kube-api-access-d95wt\") pod \"8aff0f09-f19b-4da0-addf-02d3c09d04df\" (UID: \"8aff0f09-f19b-4da0-addf-02d3c09d04df\") " Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.386353 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8aff0f09-f19b-4da0-addf-02d3c09d04df-host" (OuterVolumeSpecName: "host") pod "8aff0f09-f19b-4da0-addf-02d3c09d04df" (UID: "8aff0f09-f19b-4da0-addf-02d3c09d04df"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.386806 4996 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8aff0f09-f19b-4da0-addf-02d3c09d04df-host\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.391258 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aff0f09-f19b-4da0-addf-02d3c09d04df-kube-api-access-d95wt" (OuterVolumeSpecName: "kube-api-access-d95wt") pod "8aff0f09-f19b-4da0-addf-02d3c09d04df" (UID: "8aff0f09-f19b-4da0-addf-02d3c09d04df"). InnerVolumeSpecName "kube-api-access-d95wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:04:56 crc kubenswrapper[4996]: I0228 11:04:56.489624 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d95wt\" (UniqueName: \"kubernetes.io/projected/8aff0f09-f19b-4da0-addf-02d3c09d04df-kube-api-access-d95wt\") on node \"crc\" DevicePath \"\"" Feb 28 11:04:57 crc kubenswrapper[4996]: I0228 11:04:57.046924 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aff0f09-f19b-4da0-addf-02d3c09d04df" path="/var/lib/kubelet/pods/8aff0f09-f19b-4da0-addf-02d3c09d04df/volumes" Feb 28 11:04:57 crc kubenswrapper[4996]: I0228 11:04:57.275763 4996 scope.go:117] "RemoveContainer" containerID="e9bb046647a6fb07329df1bfd0f1915f86d7432ee06669933fbdfefc1fa778c2" Feb 28 11:04:57 crc kubenswrapper[4996]: I0228 11:04:57.275801 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9txfd/crc-debug-kpx6s" Feb 28 11:05:05 crc kubenswrapper[4996]: I0228 11:05:05.906192 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:05:05 crc kubenswrapper[4996]: I0228 11:05:05.906837 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:05:05 crc kubenswrapper[4996]: I0228 11:05:05.967561 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:05:06 crc kubenswrapper[4996]: I0228 11:05:06.399898 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:05:06 crc kubenswrapper[4996]: I0228 11:05:06.465407 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ft9t4"] Feb 28 11:05:08 crc kubenswrapper[4996]: I0228 11:05:08.376564 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ft9t4" podUID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerName="registry-server" containerID="cri-o://8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc" gracePeriod=2 Feb 28 11:05:08 crc kubenswrapper[4996]: I0228 11:05:08.921126 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.036721 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:05:09 crc kubenswrapper[4996]: E0228 11:05:09.037131 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.058268 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vhrm\" (UniqueName: \"kubernetes.io/projected/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-kube-api-access-6vhrm\") pod \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.058358 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-utilities\") pod \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.058591 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-catalog-content\") pod \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\" (UID: \"d24c5d9d-c5fe-4c75-b8e9-c887873829d5\") " Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.061823 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-utilities" (OuterVolumeSpecName: "utilities") pod "d24c5d9d-c5fe-4c75-b8e9-c887873829d5" (UID: "d24c5d9d-c5fe-4c75-b8e9-c887873829d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.071580 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-kube-api-access-6vhrm" (OuterVolumeSpecName: "kube-api-access-6vhrm") pod "d24c5d9d-c5fe-4c75-b8e9-c887873829d5" (UID: "d24c5d9d-c5fe-4c75-b8e9-c887873829d5"). InnerVolumeSpecName "kube-api-access-6vhrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.161952 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vhrm\" (UniqueName: \"kubernetes.io/projected/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-kube-api-access-6vhrm\") on node \"crc\" DevicePath \"\"" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.161994 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.181695 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d24c5d9d-c5fe-4c75-b8e9-c887873829d5" (UID: "d24c5d9d-c5fe-4c75-b8e9-c887873829d5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.263741 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d24c5d9d-c5fe-4c75-b8e9-c887873829d5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.387467 4996 generic.go:334] "Generic (PLEG): container finished" podID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerID="8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc" exitCode=0 Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.387531 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft9t4" event={"ID":"d24c5d9d-c5fe-4c75-b8e9-c887873829d5","Type":"ContainerDied","Data":"8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc"} Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.387563 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft9t4" event={"ID":"d24c5d9d-c5fe-4c75-b8e9-c887873829d5","Type":"ContainerDied","Data":"2415039b7d44c65a55bb16e7e9b6353867bc67fc945895130af6babec29eb2e1"} Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.387582 4996 scope.go:117] "RemoveContainer" containerID="8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.387718 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ft9t4" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.425591 4996 scope.go:117] "RemoveContainer" containerID="2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.433387 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ft9t4"] Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.445333 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ft9t4"] Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.454322 4996 scope.go:117] "RemoveContainer" containerID="728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.500112 4996 scope.go:117] "RemoveContainer" containerID="8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc" Feb 28 11:05:09 crc kubenswrapper[4996]: E0228 11:05:09.500571 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc\": container with ID starting with 8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc not found: ID does not exist" containerID="8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.500603 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc"} err="failed to get container status \"8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc\": rpc error: code = NotFound desc = could not find container \"8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc\": container with ID starting with 8c2f5bc4a8c1926b3b0fb8da676cd597a1075ca419860b169dd237e86aa551fc not found: ID does 
not exist" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.500625 4996 scope.go:117] "RemoveContainer" containerID="2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3" Feb 28 11:05:09 crc kubenswrapper[4996]: E0228 11:05:09.501020 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3\": container with ID starting with 2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3 not found: ID does not exist" containerID="2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.501082 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3"} err="failed to get container status \"2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3\": rpc error: code = NotFound desc = could not find container \"2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3\": container with ID starting with 2421fe9e4a38e1cdca30853b5a734612678c7da91e2e00676f57f2ab1759eae3 not found: ID does not exist" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.501117 4996 scope.go:117] "RemoveContainer" containerID="728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b" Feb 28 11:05:09 crc kubenswrapper[4996]: E0228 11:05:09.501444 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b\": container with ID starting with 728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b not found: ID does not exist" containerID="728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b" Feb 28 11:05:09 crc kubenswrapper[4996]: I0228 11:05:09.501474 4996 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b"} err="failed to get container status \"728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b\": rpc error: code = NotFound desc = could not find container \"728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b\": container with ID starting with 728c34d52da5d241269963b6f6d0840b973698f860f8d5a140bf8924af4e772b not found: ID does not exist" Feb 28 11:05:11 crc kubenswrapper[4996]: I0228 11:05:11.045094 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" path="/var/lib/kubelet/pods/d24c5d9d-c5fe-4c75-b8e9-c887873829d5/volumes" Feb 28 11:05:20 crc kubenswrapper[4996]: I0228 11:05:20.033388 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:05:20 crc kubenswrapper[4996]: I0228 11:05:20.500145 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"a51d8d511d6b011a2a3ec4c88be404ecf9e6fca10cd03fdce244834cf9c24eb2"} Feb 28 11:05:35 crc kubenswrapper[4996]: I0228 11:05:35.650606 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ansibletest-ansibletest_f6886487-0ab2-404d-aa70-4be59320885a/ansibletest-ansibletest/0.log" Feb 28 11:05:35 crc kubenswrapper[4996]: I0228 11:05:35.830687 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58dc667d-krgck_ae98f057-6852-4905-a4d6-5b6d121cb4a1/barbican-api/0.log" Feb 28 11:05:35 crc kubenswrapper[4996]: I0228 11:05:35.894382 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58dc667d-krgck_ae98f057-6852-4905-a4d6-5b6d121cb4a1/barbican-api-log/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 
11:05:36.024119 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7457c94496-jvp8b_d4d19959-1945-43a8-b005-f4f136fcdf10/barbican-keystone-listener/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.130546 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-665ffb6dc-7h4gw_85fcdb36-feb8-4f2f-a91e-ffbce6e91d04/barbican-worker/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.292886 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-665ffb6dc-7h4gw_85fcdb36-feb8-4f2f-a91e-ffbce6e91d04/barbican-worker-log/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.436093 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75_d0a3e2dd-04e0-4625-b69c-6fddf875deeb/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.585025 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7457c94496-jvp8b_d4d19959-1945-43a8-b005-f4f136fcdf10/barbican-keystone-listener-log/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.587141 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_84f19b5a-912c-4a5d-a7f7-05d8a637bc1c/ceilometer-central-agent/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.632993 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_84f19b5a-912c-4a5d-a7f7-05d8a637bc1c/ceilometer-notification-agent/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.748129 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_84f19b5a-912c-4a5d-a7f7-05d8a637bc1c/sg-core/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.789820 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_84f19b5a-912c-4a5d-a7f7-05d8a637bc1c/proxy-httpd/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.839764 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh_f0ea0b93-3364-4191-b14e-6ad457132874/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:36 crc kubenswrapper[4996]: I0228 11:05:36.983897 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9_4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:37 crc kubenswrapper[4996]: I0228 11:05:37.167904 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f13ff650-58de-4d3f-a56b-f77ef33ddf89/cinder-api/0.log" Feb 28 11:05:37 crc kubenswrapper[4996]: I0228 11:05:37.200737 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f13ff650-58de-4d3f-a56b-f77ef33ddf89/cinder-api-log/0.log" Feb 28 11:05:37 crc kubenswrapper[4996]: I0228 11:05:37.377207 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_16eb7691-5159-4f12-88d5-79d8e9b902b2/cinder-backup/0.log" Feb 28 11:05:37 crc kubenswrapper[4996]: I0228 11:05:37.433426 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a57ea6ee-2619-4875-96e6-60622a9754d3/cinder-scheduler/0.log" Feb 28 11:05:37 crc kubenswrapper[4996]: I0228 11:05:37.483343 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_16eb7691-5159-4f12-88d5-79d8e9b902b2/probe/0.log" Feb 28 11:05:37 crc kubenswrapper[4996]: I0228 11:05:37.657958 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a57ea6ee-2619-4875-96e6-60622a9754d3/probe/0.log" Feb 28 11:05:37 crc kubenswrapper[4996]: I0228 11:05:37.756781 4996 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_d40d2784-2f7e-4cde-bb71-ff077d54ea57/cinder-volume/0.log" Feb 28 11:05:37 crc kubenswrapper[4996]: I0228 11:05:37.758496 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_d40d2784-2f7e-4cde-bb71-ff077d54ea57/probe/0.log" Feb 28 11:05:37 crc kubenswrapper[4996]: I0228 11:05:37.949905 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7qknl_a8f28eab-0652-46d4-817f-9b48a6f71e4a/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:38 crc kubenswrapper[4996]: I0228 11:05:38.000655 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-64g89_1fbdc43d-b502-4eca-9040-604271ec1f6e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:38 crc kubenswrapper[4996]: I0228 11:05:38.129567 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-t5mnf_3bea2fd5-b365-4936-a700-6810be669d7b/init/0.log" Feb 28 11:05:38 crc kubenswrapper[4996]: I0228 11:05:38.301122 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-t5mnf_3bea2fd5-b365-4936-a700-6810be669d7b/init/0.log" Feb 28 11:05:38 crc kubenswrapper[4996]: I0228 11:05:38.371408 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d7bb241a-bbf4-499a-b203-d51d32c8964d/glance-httpd/0.log" Feb 28 11:05:38 crc kubenswrapper[4996]: I0228 11:05:38.543098 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-t5mnf_3bea2fd5-b365-4936-a700-6810be669d7b/dnsmasq-dns/0.log" Feb 28 11:05:38 crc kubenswrapper[4996]: I0228 11:05:38.551539 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_d7bb241a-bbf4-499a-b203-d51d32c8964d/glance-log/0.log" Feb 28 11:05:38 crc kubenswrapper[4996]: I0228 11:05:38.669446 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6608d2cf-7157-45c2-9a82-99354bf88cee/glance-httpd/0.log" Feb 28 11:05:38 crc kubenswrapper[4996]: I0228 11:05:38.768203 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6608d2cf-7157-45c2-9a82-99354bf88cee/glance-log/0.log" Feb 28 11:05:38 crc kubenswrapper[4996]: I0228 11:05:38.883454 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6ccc6bcbc4-2fmz9_b605afa6-a344-45f0-b62a-56f46b346c52/horizon/0.log" Feb 28 11:05:39 crc kubenswrapper[4996]: I0228 11:05:39.048849 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizontest-tests-horizontest_915895e5-31ba-450f-b3e8-a385e5937353/horizontest-tests-horizontest/0.log" Feb 28 11:05:39 crc kubenswrapper[4996]: I0228 11:05:39.287121 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v_645c2ca2-c74e-44d7-a0e7-6f161b14aa55/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:39 crc kubenswrapper[4996]: I0228 11:05:39.414722 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ghlpr_500ae8f8-17b1-45fb-9569-d49fd19cdea6/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:39 crc kubenswrapper[4996]: I0228 11:05:39.862934 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29537881-447jn_d637ef52-36d0-4c60-8bef-201d71cac614/keystone-cron/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.072593 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29537941-pj4nm_9acff40e-9809-41c5-b307-388aa1a815d2/keystone-cron/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.168169 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6ccc6bcbc4-2fmz9_b605afa6-a344-45f0-b62a-56f46b346c52/horizon-log/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.222253 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_93e23f7f-31d2-496c-898d-4f46db4da6cc/kube-state-metrics/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.398555 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l_f0393bfd-0a6b-48e8-8ccb-45ec21b73b58/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.538851 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7f342b89-e95f-4811-a844-690bb97b8b32/manila-api-log/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.629931 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7f342b89-e95f-4811-a844-690bb97b8b32/manila-api/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.762143 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_bb3beaae-37f6-4cbd-af32-919a3b9df37e/probe/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.814280 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_bb3beaae-37f6-4cbd-af32-919a3b9df37e/manila-scheduler/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.964527 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_bd1af971-3595-4d44-98b7-8878b4d13222/probe/0.log" Feb 28 11:05:40 crc kubenswrapper[4996]: I0228 11:05:40.964738 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-share-share1-0_bd1af971-3595-4d44-98b7-8878b4d13222/manila-share/0.log" Feb 28 11:05:41 crc kubenswrapper[4996]: I0228 11:05:41.642874 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt_b17e3d39-7e71-472f-9011-d825c77b005a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:42 crc kubenswrapper[4996]: I0228 11:05:42.356342 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-644f7b559b-gngw5_736c34b0-e2b3-4d08-be5b-53491a475d18/keystone-api/0.log" Feb 28 11:05:42 crc kubenswrapper[4996]: I0228 11:05:42.466845 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-679bcc7697-9hs5j_50e72561-9c77-43f9-8f8d-0c9be05be3f6/neutron-httpd/0.log" Feb 28 11:05:43 crc kubenswrapper[4996]: I0228 11:05:43.330612 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-679bcc7697-9hs5j_50e72561-9c77-43f9-8f8d-0c9be05be3f6/neutron-api/0.log" Feb 28 11:05:43 crc kubenswrapper[4996]: I0228 11:05:43.533739 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d65ff4ec-036e-4680-8a41-9941e185fc14/nova-cell0-conductor-conductor/0.log" Feb 28 11:05:43 crc kubenswrapper[4996]: I0228 11:05:43.890205 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_09bc4f70-3953-4e3d-a6b0-60905a719e37/nova-cell1-conductor-conductor/0.log" Feb 28 11:05:44 crc kubenswrapper[4996]: I0228 11:05:44.204353 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_28a8ab76-f177-47a0-8b6c-9f8c75739b30/nova-cell1-novncproxy-novncproxy/0.log" Feb 28 11:05:44 crc kubenswrapper[4996]: I0228 11:05:44.438396 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm_1d56c0f7-03f9-4035-b2d2-ef6d77821940/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:44 crc kubenswrapper[4996]: I0228 11:05:44.758934 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3eb16fcc-5ac7-437e-bca5-e82873599fac/nova-metadata-log/0.log" Feb 28 11:05:45 crc kubenswrapper[4996]: I0228 11:05:45.942281 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_594a5261-8810-4189-9140-39d0fc645c6e/nova-scheduler-scheduler/0.log" Feb 28 11:05:46 crc kubenswrapper[4996]: I0228 11:05:46.361450 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25c48fac-9425-4af6-aa7d-6b2c2428ef2d/mysql-bootstrap/0.log" Feb 28 11:05:46 crc kubenswrapper[4996]: I0228 11:05:46.480437 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_07191c7b-ef05-4fca-ab52-6df77fc1b92a/nova-api-log/0.log" Feb 28 11:05:46 crc kubenswrapper[4996]: I0228 11:05:46.581617 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25c48fac-9425-4af6-aa7d-6b2c2428ef2d/mysql-bootstrap/0.log" Feb 28 11:05:46 crc kubenswrapper[4996]: I0228 11:05:46.684188 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25c48fac-9425-4af6-aa7d-6b2c2428ef2d/galera/0.log" Feb 28 11:05:46 crc kubenswrapper[4996]: I0228 11:05:46.855374 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cce18b01-6974-43c9-86e2-564a4024564b/mysql-bootstrap/0.log" Feb 28 11:05:47 crc kubenswrapper[4996]: I0228 11:05:47.057577 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cce18b01-6974-43c9-86e2-564a4024564b/mysql-bootstrap/0.log" Feb 28 11:05:47 crc kubenswrapper[4996]: I0228 11:05:47.116026 4996 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cce18b01-6974-43c9-86e2-564a4024564b/galera/0.log" Feb 28 11:05:47 crc kubenswrapper[4996]: I0228 11:05:47.304175 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_07191c7b-ef05-4fca-ab52-6df77fc1b92a/nova-api-api/0.log" Feb 28 11:05:47 crc kubenswrapper[4996]: I0228 11:05:47.325377 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ea848d22-46ca-46ec-a5e7-5b26014b569b/openstackclient/0.log" Feb 28 11:05:47 crc kubenswrapper[4996]: I0228 11:05:47.471557 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6rm4w_ab34e1ca-2f20-4604-85fa-ca92e0a1ce68/ovn-controller/0.log" Feb 28 11:05:47 crc kubenswrapper[4996]: I0228 11:05:47.695246 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5hxhx_1edf409f-42c6-4e00-bf2e-6cd81644033a/openstack-network-exporter/0.log" Feb 28 11:05:47 crc kubenswrapper[4996]: I0228 11:05:47.818363 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lm47_664813b7-20c4-40e4-b4a8-9beacfb177fa/ovsdb-server-init/0.log" Feb 28 11:05:47 crc kubenswrapper[4996]: I0228 11:05:47.972858 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lm47_664813b7-20c4-40e4-b4a8-9beacfb177fa/ovsdb-server-init/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.030763 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lm47_664813b7-20c4-40e4-b4a8-9beacfb177fa/ovs-vswitchd/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.070901 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lm47_664813b7-20c4-40e4-b4a8-9beacfb177fa/ovsdb-server/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.248214 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-f8v97_688e7207-5681-405d-9548-9c8d753b28e1/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.423252 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c38b2e2f-cb15-44c6-b4d9-1b9d80c57045/openstack-network-exporter/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.441517 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c38b2e2f-cb15-44c6-b4d9-1b9d80c57045/ovn-northd/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.562079 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3eb16fcc-5ac7-437e-bca5-e82873599fac/nova-metadata-metadata/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.637649 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0b250a70-da80-4cf5-842b-3a4897a4cbc8/openstack-network-exporter/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.650621 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0b250a70-da80-4cf5-842b-3a4897a4cbc8/ovsdbserver-nb/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.766716 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e/openstack-network-exporter/0.log" Feb 28 11:05:48 crc kubenswrapper[4996]: I0228 11:05:48.828815 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e/ovsdbserver-sb/0.log" Feb 28 11:05:49 crc kubenswrapper[4996]: I0228 11:05:49.373163 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_77efb507-fab5-4164-8cd8-576b15f4d6f8/setup-container/0.log" Feb 28 11:05:49 crc kubenswrapper[4996]: I0228 11:05:49.548204 4996 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_77efb507-fab5-4164-8cd8-576b15f4d6f8/setup-container/0.log" Feb 28 11:05:49 crc kubenswrapper[4996]: I0228 11:05:49.572504 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_77efb507-fab5-4164-8cd8-576b15f4d6f8/rabbitmq/0.log" Feb 28 11:05:49 crc kubenswrapper[4996]: I0228 11:05:49.732154 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-594c4f7c44-lnbrv_d17faf34-1a55-4544-8da0-2b15159ff1d6/placement-api/0.log" Feb 28 11:05:49 crc kubenswrapper[4996]: I0228 11:05:49.774807 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f61ee28f-ef2a-45ee-9832-57559af20a84/setup-container/0.log" Feb 28 11:05:49 crc kubenswrapper[4996]: I0228 11:05:49.888773 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-594c4f7c44-lnbrv_d17faf34-1a55-4544-8da0-2b15159ff1d6/placement-log/0.log" Feb 28 11:05:49 crc kubenswrapper[4996]: I0228 11:05:49.961910 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f61ee28f-ef2a-45ee-9832-57559af20a84/setup-container/0.log" Feb 28 11:05:50 crc kubenswrapper[4996]: I0228 11:05:50.054119 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f61ee28f-ef2a-45ee-9832-57559af20a84/rabbitmq/0.log" Feb 28 11:05:50 crc kubenswrapper[4996]: I0228 11:05:50.126945 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f_b2cd442e-b51b-41cc-a664-fead95314ada/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:50 crc kubenswrapper[4996]: I0228 11:05:50.273082 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf_6dbca8bf-95da-4cd5-b57e-d810e5f39ae6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:50 crc 
kubenswrapper[4996]: I0228 11:05:50.410123 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-k5hwc_edad127b-e6c2-4b27-add0-60234ee9f1cb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:50 crc kubenswrapper[4996]: I0228 11:05:50.491380 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-npr5w_eacfea11-3471-48df-a164-22a498aa7574/ssh-known-hosts-edpm-deployment/0.log" Feb 28 11:05:50 crc kubenswrapper[4996]: I0228 11:05:50.771058 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_5f62e7a0-18c6-441e-8804-4760a6dd1efc/tempest-tests-tempest-tests-runner/0.log" Feb 28 11:05:50 crc kubenswrapper[4996]: I0228 11:05:50.821154 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_af137d17-a90e-42ea-8e73-3dba0196c670/tempest-tests-tempest-tests-runner/0.log" Feb 28 11:05:50 crc kubenswrapper[4996]: I0228 11:05:50.968993 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-ansibletest-ansibletest-ansibletest_502af5eb-df11-47d8-b386-7c8dc19e280c/test-operator-logs-container/0.log" Feb 28 11:05:50 crc kubenswrapper[4996]: I0228 11:05:50.984245 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-horizontest-horizontest-tests-horizontest_c025b832-dde4-4bf3-ada7-0a882c92dd0b/test-operator-logs-container/0.log" Feb 28 11:05:51 crc kubenswrapper[4996]: I0228 11:05:51.168854 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_f52349d5-5dab-4972-bdb2-835cb675071f/test-operator-logs-container/0.log" Feb 28 11:05:51 crc kubenswrapper[4996]: I0228 11:05:51.314431 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_f624dd26-b398-4f25-b94e-74a5560432a8/test-operator-logs-container/0.log" Feb 28 11:05:51 crc kubenswrapper[4996]: I0228 11:05:51.406327 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979/tobiko-tests-tobiko/0.log" Feb 28 11:05:51 crc kubenswrapper[4996]: I0228 11:05:51.575825 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_e548eb85-b67c-4520-80ff-88f65e118673/tobiko-tests-tobiko/0.log" Feb 28 11:05:51 crc kubenswrapper[4996]: I0228 11:05:51.678142 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6sltr_4b3851de-b4b3-497e-9b3d-d56d55e05792/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:05:54 crc kubenswrapper[4996]: I0228 11:05:54.617227 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3f427004-3205-42d2-86db-84131a0d2ab7/memcached/0.log" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.142588 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537946-8l8sc"] Feb 28 11:06:00 crc kubenswrapper[4996]: E0228 11:06:00.143536 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerName="extract-utilities" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.143552 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerName="extract-utilities" Feb 28 11:06:00 crc kubenswrapper[4996]: E0228 11:06:00.143566 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerName="extract-content" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.143573 4996 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerName="extract-content" Feb 28 11:06:00 crc kubenswrapper[4996]: E0228 11:06:00.143588 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerName="registry-server" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.143594 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerName="registry-server" Feb 28 11:06:00 crc kubenswrapper[4996]: E0228 11:06:00.143620 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aff0f09-f19b-4da0-addf-02d3c09d04df" containerName="container-00" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.143625 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aff0f09-f19b-4da0-addf-02d3c09d04df" containerName="container-00" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.143793 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="d24c5d9d-c5fe-4c75-b8e9-c887873829d5" containerName="registry-server" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.143805 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aff0f09-f19b-4da0-addf-02d3c09d04df" containerName="container-00" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.144427 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537946-8l8sc" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.152620 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.152651 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.153020 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.156658 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537946-8l8sc"] Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.249037 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdxgg\" (UniqueName: \"kubernetes.io/projected/ca61316d-5a0f-4fd2-8705-be0d1c6c4617-kube-api-access-xdxgg\") pod \"auto-csr-approver-29537946-8l8sc\" (UID: \"ca61316d-5a0f-4fd2-8705-be0d1c6c4617\") " pod="openshift-infra/auto-csr-approver-29537946-8l8sc" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.351415 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdxgg\" (UniqueName: \"kubernetes.io/projected/ca61316d-5a0f-4fd2-8705-be0d1c6c4617-kube-api-access-xdxgg\") pod \"auto-csr-approver-29537946-8l8sc\" (UID: \"ca61316d-5a0f-4fd2-8705-be0d1c6c4617\") " pod="openshift-infra/auto-csr-approver-29537946-8l8sc" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.371530 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdxgg\" (UniqueName: \"kubernetes.io/projected/ca61316d-5a0f-4fd2-8705-be0d1c6c4617-kube-api-access-xdxgg\") pod \"auto-csr-approver-29537946-8l8sc\" (UID: \"ca61316d-5a0f-4fd2-8705-be0d1c6c4617\") " 
pod="openshift-infra/auto-csr-approver-29537946-8l8sc" Feb 28 11:06:00 crc kubenswrapper[4996]: I0228 11:06:00.484439 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537946-8l8sc" Feb 28 11:06:01 crc kubenswrapper[4996]: I0228 11:06:01.026060 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537946-8l8sc"] Feb 28 11:06:01 crc kubenswrapper[4996]: I0228 11:06:01.859045 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537946-8l8sc" event={"ID":"ca61316d-5a0f-4fd2-8705-be0d1c6c4617","Type":"ContainerStarted","Data":"88a5074d9bef91fc6dc72e4f3a53e2fd8148d779224ef22ab99bb9a729698e60"} Feb 28 11:06:02 crc kubenswrapper[4996]: I0228 11:06:02.867791 4996 generic.go:334] "Generic (PLEG): container finished" podID="ca61316d-5a0f-4fd2-8705-be0d1c6c4617" containerID="0278c260e74182027bb6ca62196b0e80491c541f36d4aad1e86355731123dc8a" exitCode=0 Feb 28 11:06:02 crc kubenswrapper[4996]: I0228 11:06:02.867854 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537946-8l8sc" event={"ID":"ca61316d-5a0f-4fd2-8705-be0d1c6c4617","Type":"ContainerDied","Data":"0278c260e74182027bb6ca62196b0e80491c541f36d4aad1e86355731123dc8a"} Feb 28 11:06:04 crc kubenswrapper[4996]: I0228 11:06:04.349860 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537946-8l8sc" Feb 28 11:06:04 crc kubenswrapper[4996]: I0228 11:06:04.435080 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdxgg\" (UniqueName: \"kubernetes.io/projected/ca61316d-5a0f-4fd2-8705-be0d1c6c4617-kube-api-access-xdxgg\") pod \"ca61316d-5a0f-4fd2-8705-be0d1c6c4617\" (UID: \"ca61316d-5a0f-4fd2-8705-be0d1c6c4617\") " Feb 28 11:06:04 crc kubenswrapper[4996]: I0228 11:06:04.441909 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca61316d-5a0f-4fd2-8705-be0d1c6c4617-kube-api-access-xdxgg" (OuterVolumeSpecName: "kube-api-access-xdxgg") pod "ca61316d-5a0f-4fd2-8705-be0d1c6c4617" (UID: "ca61316d-5a0f-4fd2-8705-be0d1c6c4617"). InnerVolumeSpecName "kube-api-access-xdxgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:06:04 crc kubenswrapper[4996]: I0228 11:06:04.539680 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdxgg\" (UniqueName: \"kubernetes.io/projected/ca61316d-5a0f-4fd2-8705-be0d1c6c4617-kube-api-access-xdxgg\") on node \"crc\" DevicePath \"\"" Feb 28 11:06:04 crc kubenswrapper[4996]: I0228 11:06:04.885557 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537946-8l8sc" event={"ID":"ca61316d-5a0f-4fd2-8705-be0d1c6c4617","Type":"ContainerDied","Data":"88a5074d9bef91fc6dc72e4f3a53e2fd8148d779224ef22ab99bb9a729698e60"} Feb 28 11:06:04 crc kubenswrapper[4996]: I0228 11:06:04.885850 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88a5074d9bef91fc6dc72e4f3a53e2fd8148d779224ef22ab99bb9a729698e60" Feb 28 11:06:04 crc kubenswrapper[4996]: I0228 11:06:04.885662 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537946-8l8sc" Feb 28 11:06:05 crc kubenswrapper[4996]: I0228 11:06:05.423329 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537940-pw9mp"] Feb 28 11:06:05 crc kubenswrapper[4996]: I0228 11:06:05.431155 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537940-pw9mp"] Feb 28 11:06:07 crc kubenswrapper[4996]: E0228 11:06:07.040569 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:06:07 crc kubenswrapper[4996]: I0228 11:06:07.049702 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b07b45-80e5-48e7-a828-37120ebe17c4" path="/var/lib/kubelet/pods/a9b07b45-80e5-48e7-a828-37120ebe17c4/volumes" Feb 28 11:06:15 crc kubenswrapper[4996]: I0228 11:06:15.256147 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/util/0.log" Feb 28 11:06:15 crc kubenswrapper[4996]: I0228 11:06:15.452718 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/util/0.log" Feb 28 11:06:15 crc kubenswrapper[4996]: I0228 11:06:15.503898 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/pull/0.log" Feb 28 11:06:15 crc kubenswrapper[4996]: I0228 11:06:15.530229 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/pull/0.log" Feb 28 11:06:15 crc kubenswrapper[4996]: I0228 11:06:15.774181 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/util/0.log" Feb 28 11:06:15 crc kubenswrapper[4996]: I0228 11:06:15.774951 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/pull/0.log" Feb 28 11:06:15 crc kubenswrapper[4996]: I0228 11:06:15.796194 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/extract/0.log" Feb 28 11:06:16 crc kubenswrapper[4996]: I0228 11:06:16.278259 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-bk2ml_996ef81c-b994-461d-a9e0-ec61f8fe65f3/manager/0.log" Feb 28 11:06:16 crc kubenswrapper[4996]: I0228 11:06:16.615815 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-2bv7j_325efd0b-ff17-4ea1-a1d5-c12576259ce5/manager/0.log" Feb 28 11:06:16 crc kubenswrapper[4996]: I0228 11:06:16.742773 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-4zmmg_9b610086-19c9-4e01-8c4e-dcf6660d749e/manager/0.log" Feb 28 11:06:16 crc kubenswrapper[4996]: I0228 11:06:16.983227 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-7cdhw_cb2d53e3-ca80-4c1c-8d0b-02caeb753792/manager/0.log" Feb 28 11:06:17 crc kubenswrapper[4996]: I0228 
11:06:17.596720 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-6kcm4_07d97cb5-6c6a-4d30-9454-8c13b5fc9adc/manager/0.log" Feb 28 11:06:17 crc kubenswrapper[4996]: I0228 11:06:17.778777 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-x6bwt_c8adb771-c22d-4f69-90a5-61cd4a36b618/manager/0.log" Feb 28 11:06:18 crc kubenswrapper[4996]: I0228 11:06:18.278065 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-clblk_5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3/manager/0.log" Feb 28 11:06:18 crc kubenswrapper[4996]: I0228 11:06:18.595125 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-pp79l_9d5331d1-e5df-4b2c-8663-5fe6afc00995/manager/0.log" Feb 28 11:06:18 crc kubenswrapper[4996]: I0228 11:06:18.689855 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-qs4sw_93f353d8-bbaa-4ec1-b816-d23d58c05ee1/manager/0.log" Feb 28 11:06:19 crc kubenswrapper[4996]: I0228 11:06:19.005078 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-zmcjg_de3f6975-8417-4db2-9d04-5364f4127334/manager/0.log" Feb 28 11:06:19 crc kubenswrapper[4996]: I0228 11:06:19.171096 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-kx6ht_84adbefa-8503-41bf-8b9b-662b08251cff/manager/0.log" Feb 28 11:06:19 crc kubenswrapper[4996]: I0228 11:06:19.384199 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-tdl5w_bd32451f-7a7d-429f-906f-d98e355c1abf/manager/0.log" Feb 28 11:06:19 crc 
kubenswrapper[4996]: I0228 11:06:19.390435 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-d5qml_a4e35f97-45f5-457f-bc93-86536fcbee68/manager/0.log" Feb 28 11:06:19 crc kubenswrapper[4996]: I0228 11:06:19.689068 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4_2de35814-cd78-4178-8b32-1fbd89de94b4/manager/0.log" Feb 28 11:06:20 crc kubenswrapper[4996]: I0228 11:06:20.172876 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-646b94fdfc-pc26j_e8534bde-79ad-4654-8a2b-8fa14ee7266b/operator/0.log" Feb 28 11:06:20 crc kubenswrapper[4996]: I0228 11:06:20.452325 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6cj4k_55ef28a1-cfbb-4a48-8b2c-c3f0784976fd/registry-server/0.log" Feb 28 11:06:20 crc kubenswrapper[4996]: I0228 11:06:20.978783 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-wwzjf_8a886fa9-0abd-4197-9a18-09f20f403ef4/manager/0.log" Feb 28 11:06:22 crc kubenswrapper[4996]: I0228 11:06:22.098246 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65cbf4f977-dh2cm_143a07a9-b2e4-4b4b-9328-a3feee140c26/manager/0.log" Feb 28 11:06:22 crc kubenswrapper[4996]: I0228 11:06:22.211363 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-q59cv_3a0ccc77-1ced-4c14-a1ac-18523be0afd4/manager/0.log" Feb 28 11:06:22 crc kubenswrapper[4996]: I0228 11:06:22.820059 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tn67k_36f7fdcf-d295-4ee0-9155-fbd3dc0d1234/operator/0.log" Feb 28 
11:06:22 crc kubenswrapper[4996]: I0228 11:06:22.912641 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-c9v26_b503546b-54b6-4133-8d44-6a162ef54232/manager/0.log" Feb 28 11:06:23 crc kubenswrapper[4996]: I0228 11:06:23.195833 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-655d95ddc7-xxt4d_e4770c19-1759-4f93-88ea-696d28d6b149/manager/0.log" Feb 28 11:06:23 crc kubenswrapper[4996]: I0228 11:06:23.208192 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-f88sn_2d7f8619-4576-4fb4-83e1-73ebe232a06d/manager/0.log" Feb 28 11:06:23 crc kubenswrapper[4996]: I0228 11:06:23.372382 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-qzzwv_b1a21e4c-eb15-4914-9366-45a0bc6f2e3d/manager/0.log" Feb 28 11:06:29 crc kubenswrapper[4996]: I0228 11:06:29.733136 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-pppmx_9369ade1-1b2d-45cf-b376-1963d785be5c/manager/0.log" Feb 28 11:06:45 crc kubenswrapper[4996]: I0228 11:06:45.921839 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bhdsm_3f55132b-9e49-49fb-9043-aa56c455ea0f/control-plane-machine-set-operator/0.log" Feb 28 11:06:46 crc kubenswrapper[4996]: I0228 11:06:46.128891 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xd5f6_f51a22df-16fd-4f58-85dd-af4d0fc97752/kube-rbac-proxy/0.log" Feb 28 11:06:46 crc kubenswrapper[4996]: I0228 11:06:46.156912 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xd5f6_f51a22df-16fd-4f58-85dd-af4d0fc97752/machine-api-operator/0.log" Feb 28 11:06:51 crc kubenswrapper[4996]: I0228 11:06:51.368164 4996 scope.go:117] "RemoveContainer" containerID="5d755368c2f7db3484adbf0950b50baaf29bae4abe28f0d2ab5cb3eef531741d" Feb 28 11:06:59 crc kubenswrapper[4996]: I0228 11:06:59.128295 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-z92ks_e5269cb9-2bff-4476-92a8-fc85304fe923/cert-manager-controller/0.log" Feb 28 11:06:59 crc kubenswrapper[4996]: I0228 11:06:59.314439 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-fmb5g_6305a0f6-5022-49e7-b7a3-e41862e0bfbc/cert-manager-cainjector/0.log" Feb 28 11:06:59 crc kubenswrapper[4996]: I0228 11:06:59.369337 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wl22t_7db94663-acd6-4e4c-a203-2cec2afad8da/cert-manager-webhook/0.log" Feb 28 11:07:12 crc kubenswrapper[4996]: I0228 11:07:12.470110 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-8nvql_24a4ec72-da59-4afb-93a8-07f88c99753f/nmstate-console-plugin/0.log" Feb 28 11:07:12 crc kubenswrapper[4996]: I0228 11:07:12.643718 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jmt82_ce37c1b2-44ca-4001-b29b-518b02279f50/nmstate-handler/0.log" Feb 28 11:07:12 crc kubenswrapper[4996]: I0228 11:07:12.676225 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-97f4k_6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd/kube-rbac-proxy/0.log" Feb 28 11:07:12 crc kubenswrapper[4996]: I0228 11:07:12.713978 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-97f4k_6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd/nmstate-metrics/0.log" Feb 28 11:07:12 crc kubenswrapper[4996]: I0228 11:07:12.851918 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-s685x_b1a6c76e-564f-4300-ab4b-001eade60a3c/nmstate-operator/0.log" Feb 28 11:07:12 crc kubenswrapper[4996]: I0228 11:07:12.902506 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-tbjkf_a536c0f7-7da6-4af1-91a2-78ef301ca956/nmstate-webhook/0.log" Feb 28 11:07:17 crc kubenswrapper[4996]: E0228 11:07:17.040472 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:07:39 crc kubenswrapper[4996]: I0228 11:07:39.914975 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-ndmxj_2c5f5c9c-0220-40e1-9180-424aa6b0b104/kube-rbac-proxy/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.114920 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-frr-files/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.120262 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-ndmxj_2c5f5c9c-0220-40e1-9180-424aa6b0b104/controller/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.297638 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-reloader/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.315207 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-frr-files/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.339557 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-metrics/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.376122 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-reloader/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.508514 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-reloader/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.518600 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-metrics/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.524893 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-frr-files/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.572374 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-metrics/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.777144 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-frr-files/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.777166 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-metrics/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.795360 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/controller/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.800610 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-reloader/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.965785 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/frr-metrics/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.978714 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/kube-rbac-proxy-frr/0.log" Feb 28 11:07:40 crc kubenswrapper[4996]: I0228 11:07:40.990255 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/kube-rbac-proxy/0.log" Feb 28 11:07:41 crc kubenswrapper[4996]: I0228 11:07:41.143982 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/reloader/0.log" Feb 28 11:07:41 crc kubenswrapper[4996]: I0228 11:07:41.204597 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-9dp7d_1e4f759a-a03c-45a7-b736-776f1556c2f5/frr-k8s-webhook-server/0.log" Feb 28 11:07:41 crc kubenswrapper[4996]: I0228 11:07:41.449050 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b55d58fc7-vm879_0e8f07d7-a80e-4587-979f-26d28ce2bf2f/manager/0.log" Feb 28 11:07:41 crc kubenswrapper[4996]: I0228 11:07:41.557717 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6f44cf5f86-2slf9_5146aecb-1f48-48a2-ae75-5289e11c2c06/webhook-server/0.log" Feb 28 11:07:41 crc kubenswrapper[4996]: I0228 11:07:41.698834 4996 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w5v6p_3f229baa-c709-4168-b123-25ee77a6f4c0/kube-rbac-proxy/0.log" Feb 28 11:07:42 crc kubenswrapper[4996]: I0228 11:07:42.248331 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:07:42 crc kubenswrapper[4996]: I0228 11:07:42.248400 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:07:42 crc kubenswrapper[4996]: I0228 11:07:42.519170 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w5v6p_3f229baa-c709-4168-b123-25ee77a6f4c0/speaker/0.log" Feb 28 11:07:43 crc kubenswrapper[4996]: I0228 11:07:43.528433 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/frr/0.log" Feb 28 11:07:54 crc kubenswrapper[4996]: I0228 11:07:54.827196 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/util/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.087211 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/util/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.126211 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/pull/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.144042 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/pull/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.275369 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/util/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.302909 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/extract/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.338258 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/pull/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.472265 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-utilities/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.658413 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-content/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.661099 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-content/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.690398 4996 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-utilities/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.832485 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-content/0.log" Feb 28 11:07:55 crc kubenswrapper[4996]: I0228 11:07:55.834464 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-utilities/0.log" Feb 28 11:07:56 crc kubenswrapper[4996]: I0228 11:07:56.070359 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-utilities/0.log" Feb 28 11:07:56 crc kubenswrapper[4996]: I0228 11:07:56.261733 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-utilities/0.log" Feb 28 11:07:56 crc kubenswrapper[4996]: I0228 11:07:56.277464 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-content/0.log" Feb 28 11:07:56 crc kubenswrapper[4996]: I0228 11:07:56.335823 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-content/0.log" Feb 28 11:07:56 crc kubenswrapper[4996]: I0228 11:07:56.485625 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-content/0.log" Feb 28 11:07:56 crc kubenswrapper[4996]: I0228 11:07:56.503127 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-utilities/0.log" Feb 28 11:07:56 crc kubenswrapper[4996]: I0228 11:07:56.802398 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/util/0.log" Feb 28 11:07:56 crc kubenswrapper[4996]: I0228 11:07:56.992458 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/util/0.log" Feb 28 11:07:56 crc kubenswrapper[4996]: I0228 11:07:56.995042 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/pull/0.log" Feb 28 11:07:57 crc kubenswrapper[4996]: I0228 11:07:57.207632 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/pull/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.102766 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/util/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.123863 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/pull/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.137920 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/registry-server/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.292099 
4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/extract/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.424919 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vtmgq_cee24e37-cdd8-4423-831e-8c13e1f30c37/marketplace-operator/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.565266 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-utilities/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.810078 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-content/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.816512 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-utilities/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.881279 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/registry-server/0.log" Feb 28 11:07:58 crc kubenswrapper[4996]: I0228 11:07:58.885711 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-content/0.log" Feb 28 11:07:59 crc kubenswrapper[4996]: I0228 11:07:59.086938 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-utilities/0.log" Feb 28 11:07:59 crc kubenswrapper[4996]: I0228 11:07:59.106276 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-content/0.log" Feb 28 11:07:59 crc kubenswrapper[4996]: I0228 11:07:59.380771 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-content/0.log" Feb 28 11:07:59 crc kubenswrapper[4996]: I0228 11:07:59.407215 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-utilities/0.log" Feb 28 11:07:59 crc kubenswrapper[4996]: I0228 11:07:59.411312 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-content/0.log" Feb 28 11:07:59 crc kubenswrapper[4996]: I0228 11:07:59.547718 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-utilities/0.log" Feb 28 11:07:59 crc kubenswrapper[4996]: I0228 11:07:59.575157 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-content/0.log" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.109530 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-utilities/0.log" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.145169 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537948-b8qbz"] Feb 28 11:08:00 crc kubenswrapper[4996]: E0228 11:08:00.145612 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca61316d-5a0f-4fd2-8705-be0d1c6c4617" containerName="oc" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.145629 4996 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ca61316d-5a0f-4fd2-8705-be0d1c6c4617" containerName="oc" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.145843 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca61316d-5a0f-4fd2-8705-be0d1c6c4617" containerName="oc" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.146591 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537948-b8qbz" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.149441 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.149608 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.151082 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.157368 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537948-b8qbz"] Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.300416 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrfjd\" (UniqueName: \"kubernetes.io/projected/78877f16-1062-4eaf-8166-9aafef3b38b5-kube-api-access-lrfjd\") pod \"auto-csr-approver-29537948-b8qbz\" (UID: \"78877f16-1062-4eaf-8166-9aafef3b38b5\") " pod="openshift-infra/auto-csr-approver-29537948-b8qbz" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.402085 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrfjd\" (UniqueName: \"kubernetes.io/projected/78877f16-1062-4eaf-8166-9aafef3b38b5-kube-api-access-lrfjd\") pod \"auto-csr-approver-29537948-b8qbz\" (UID: \"78877f16-1062-4eaf-8166-9aafef3b38b5\") " 
pod="openshift-infra/auto-csr-approver-29537948-b8qbz" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.402227 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/registry-server/0.log" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.422494 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrfjd\" (UniqueName: \"kubernetes.io/projected/78877f16-1062-4eaf-8166-9aafef3b38b5-kube-api-access-lrfjd\") pod \"auto-csr-approver-29537948-b8qbz\" (UID: \"78877f16-1062-4eaf-8166-9aafef3b38b5\") " pod="openshift-infra/auto-csr-approver-29537948-b8qbz" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.467351 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537948-b8qbz" Feb 28 11:08:00 crc kubenswrapper[4996]: I0228 11:08:00.937138 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537948-b8qbz"] Feb 28 11:08:01 crc kubenswrapper[4996]: I0228 11:08:01.235243 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/registry-server/0.log" Feb 28 11:08:01 crc kubenswrapper[4996]: I0228 11:08:01.933052 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537948-b8qbz" event={"ID":"78877f16-1062-4eaf-8166-9aafef3b38b5","Type":"ContainerStarted","Data":"7955ac7f2aedc899b4456007fa281d3c8ba6aa0359641c229b42552991172065"} Feb 28 11:08:02 crc kubenswrapper[4996]: I0228 11:08:02.942514 4996 generic.go:334] "Generic (PLEG): container finished" podID="78877f16-1062-4eaf-8166-9aafef3b38b5" containerID="8aa9aee2e281bfeafeeb191772e8ea57d526454e797e82c491b2b05dc99f2b2f" exitCode=0 Feb 28 11:08:02 crc kubenswrapper[4996]: I0228 11:08:02.942576 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29537948-b8qbz" event={"ID":"78877f16-1062-4eaf-8166-9aafef3b38b5","Type":"ContainerDied","Data":"8aa9aee2e281bfeafeeb191772e8ea57d526454e797e82c491b2b05dc99f2b2f"} Feb 28 11:08:04 crc kubenswrapper[4996]: I0228 11:08:04.355861 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537948-b8qbz" Feb 28 11:08:04 crc kubenswrapper[4996]: I0228 11:08:04.483614 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrfjd\" (UniqueName: \"kubernetes.io/projected/78877f16-1062-4eaf-8166-9aafef3b38b5-kube-api-access-lrfjd\") pod \"78877f16-1062-4eaf-8166-9aafef3b38b5\" (UID: \"78877f16-1062-4eaf-8166-9aafef3b38b5\") " Feb 28 11:08:04 crc kubenswrapper[4996]: I0228 11:08:04.489615 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78877f16-1062-4eaf-8166-9aafef3b38b5-kube-api-access-lrfjd" (OuterVolumeSpecName: "kube-api-access-lrfjd") pod "78877f16-1062-4eaf-8166-9aafef3b38b5" (UID: "78877f16-1062-4eaf-8166-9aafef3b38b5"). InnerVolumeSpecName "kube-api-access-lrfjd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:08:04 crc kubenswrapper[4996]: I0228 11:08:04.586253 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrfjd\" (UniqueName: \"kubernetes.io/projected/78877f16-1062-4eaf-8166-9aafef3b38b5-kube-api-access-lrfjd\") on node \"crc\" DevicePath \"\"" Feb 28 11:08:04 crc kubenswrapper[4996]: I0228 11:08:04.965617 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537948-b8qbz" event={"ID":"78877f16-1062-4eaf-8166-9aafef3b38b5","Type":"ContainerDied","Data":"7955ac7f2aedc899b4456007fa281d3c8ba6aa0359641c229b42552991172065"} Feb 28 11:08:04 crc kubenswrapper[4996]: I0228 11:08:04.965918 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7955ac7f2aedc899b4456007fa281d3c8ba6aa0359641c229b42552991172065" Feb 28 11:08:04 crc kubenswrapper[4996]: I0228 11:08:04.965789 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537948-b8qbz" Feb 28 11:08:05 crc kubenswrapper[4996]: I0228 11:08:05.418017 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537942-fwrs7"] Feb 28 11:08:05 crc kubenswrapper[4996]: I0228 11:08:05.426263 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537942-fwrs7"] Feb 28 11:08:07 crc kubenswrapper[4996]: I0228 11:08:07.044796 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b24cca9-b985-4d62-bcf6-6f5327a53251" path="/var/lib/kubelet/pods/3b24cca9-b985-4d62-bcf6-6f5327a53251/volumes" Feb 28 11:08:12 crc kubenswrapper[4996]: I0228 11:08:12.249315 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 11:08:12 crc kubenswrapper[4996]: I0228 11:08:12.249910 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:08:41 crc kubenswrapper[4996]: E0228 11:08:41.033714 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:08:42 crc kubenswrapper[4996]: I0228 11:08:42.248814 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:08:42 crc kubenswrapper[4996]: I0228 11:08:42.248877 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:08:42 crc kubenswrapper[4996]: I0228 11:08:42.248926 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 11:08:42 crc kubenswrapper[4996]: I0228 11:08:42.249724 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a51d8d511d6b011a2a3ec4c88be404ecf9e6fca10cd03fdce244834cf9c24eb2"} 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 11:08:42 crc kubenswrapper[4996]: I0228 11:08:42.249796 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://a51d8d511d6b011a2a3ec4c88be404ecf9e6fca10cd03fdce244834cf9c24eb2" gracePeriod=600 Feb 28 11:08:43 crc kubenswrapper[4996]: I0228 11:08:43.331522 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="a51d8d511d6b011a2a3ec4c88be404ecf9e6fca10cd03fdce244834cf9c24eb2" exitCode=0 Feb 28 11:08:43 crc kubenswrapper[4996]: I0228 11:08:43.331675 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"a51d8d511d6b011a2a3ec4c88be404ecf9e6fca10cd03fdce244834cf9c24eb2"} Feb 28 11:08:43 crc kubenswrapper[4996]: I0228 11:08:43.332263 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa"} Feb 28 11:08:43 crc kubenswrapper[4996]: I0228 11:08:43.332296 4996 scope.go:117] "RemoveContainer" containerID="73e7ca811bac7c08287a4dd44c7497890aa2a7931b1a2171030f9028d30f3d9e" Feb 28 11:08:51 crc kubenswrapper[4996]: I0228 11:08:51.472279 4996 scope.go:117] "RemoveContainer" containerID="7aa6890b8748cc6822c6a7ce338b3177e9dd678c8644750602f61854c46c059a" Feb 28 11:10:00 crc kubenswrapper[4996]: E0228 11:10:00.033271 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" 
podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.139397 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537950-wspwd"] Feb 28 11:10:00 crc kubenswrapper[4996]: E0228 11:10:00.139910 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78877f16-1062-4eaf-8166-9aafef3b38b5" containerName="oc" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.139931 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="78877f16-1062-4eaf-8166-9aafef3b38b5" containerName="oc" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.140201 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="78877f16-1062-4eaf-8166-9aafef3b38b5" containerName="oc" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.140891 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537950-wspwd" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.142880 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.142974 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.143104 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.153281 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537950-wspwd"] Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.245813 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flk79\" (UniqueName: \"kubernetes.io/projected/f011ae66-b283-4176-a114-20d35ff4065e-kube-api-access-flk79\") pod \"auto-csr-approver-29537950-wspwd\" (UID: \"f011ae66-b283-4176-a114-20d35ff4065e\") " pod="openshift-infra/auto-csr-approver-29537950-wspwd" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.348238 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flk79\" (UniqueName: \"kubernetes.io/projected/f011ae66-b283-4176-a114-20d35ff4065e-kube-api-access-flk79\") pod \"auto-csr-approver-29537950-wspwd\" (UID: \"f011ae66-b283-4176-a114-20d35ff4065e\") " pod="openshift-infra/auto-csr-approver-29537950-wspwd" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.371585 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flk79\" (UniqueName: \"kubernetes.io/projected/f011ae66-b283-4176-a114-20d35ff4065e-kube-api-access-flk79\") pod \"auto-csr-approver-29537950-wspwd\" (UID: \"f011ae66-b283-4176-a114-20d35ff4065e\") " 
pod="openshift-infra/auto-csr-approver-29537950-wspwd" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.484934 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537950-wspwd" Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.973325 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 11:10:00 crc kubenswrapper[4996]: I0228 11:10:00.978901 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537950-wspwd"] Feb 28 11:10:01 crc kubenswrapper[4996]: I0228 11:10:01.313183 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537950-wspwd" event={"ID":"f011ae66-b283-4176-a114-20d35ff4065e","Type":"ContainerStarted","Data":"820f42865327e1e3b11d868abd7e789af1177f372d081c219acb12328130eea7"} Feb 28 11:10:02 crc kubenswrapper[4996]: I0228 11:10:02.325507 4996 generic.go:334] "Generic (PLEG): container finished" podID="f011ae66-b283-4176-a114-20d35ff4065e" containerID="8388481515d58f45eacdc7c0db99b537c2705d20a6bf4cfddcdc41531a21afde" exitCode=0 Feb 28 11:10:02 crc kubenswrapper[4996]: I0228 11:10:02.325654 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537950-wspwd" event={"ID":"f011ae66-b283-4176-a114-20d35ff4065e","Type":"ContainerDied","Data":"8388481515d58f45eacdc7c0db99b537c2705d20a6bf4cfddcdc41531a21afde"} Feb 28 11:10:03 crc kubenswrapper[4996]: I0228 11:10:03.658145 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537950-wspwd" Feb 28 11:10:03 crc kubenswrapper[4996]: I0228 11:10:03.825551 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flk79\" (UniqueName: \"kubernetes.io/projected/f011ae66-b283-4176-a114-20d35ff4065e-kube-api-access-flk79\") pod \"f011ae66-b283-4176-a114-20d35ff4065e\" (UID: \"f011ae66-b283-4176-a114-20d35ff4065e\") " Feb 28 11:10:03 crc kubenswrapper[4996]: I0228 11:10:03.832445 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f011ae66-b283-4176-a114-20d35ff4065e-kube-api-access-flk79" (OuterVolumeSpecName: "kube-api-access-flk79") pod "f011ae66-b283-4176-a114-20d35ff4065e" (UID: "f011ae66-b283-4176-a114-20d35ff4065e"). InnerVolumeSpecName "kube-api-access-flk79". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:10:03 crc kubenswrapper[4996]: I0228 11:10:03.928771 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flk79\" (UniqueName: \"kubernetes.io/projected/f011ae66-b283-4176-a114-20d35ff4065e-kube-api-access-flk79\") on node \"crc\" DevicePath \"\"" Feb 28 11:10:04 crc kubenswrapper[4996]: I0228 11:10:04.344132 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537950-wspwd" event={"ID":"f011ae66-b283-4176-a114-20d35ff4065e","Type":"ContainerDied","Data":"820f42865327e1e3b11d868abd7e789af1177f372d081c219acb12328130eea7"} Feb 28 11:10:04 crc kubenswrapper[4996]: I0228 11:10:04.344182 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820f42865327e1e3b11d868abd7e789af1177f372d081c219acb12328130eea7" Feb 28 11:10:04 crc kubenswrapper[4996]: I0228 11:10:04.344244 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537950-wspwd" Feb 28 11:10:04 crc kubenswrapper[4996]: E0228 11:10:04.521106 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf011ae66_b283_4176_a114_20d35ff4065e.slice/crio-820f42865327e1e3b11d868abd7e789af1177f372d081c219acb12328130eea7\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf011ae66_b283_4176_a114_20d35ff4065e.slice\": RecentStats: unable to find data in memory cache]" Feb 28 11:10:04 crc kubenswrapper[4996]: I0228 11:10:04.727234 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537944-scnxv"] Feb 28 11:10:04 crc kubenswrapper[4996]: I0228 11:10:04.736080 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537944-scnxv"] Feb 28 11:10:05 crc kubenswrapper[4996]: I0228 11:10:05.049933 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2067f086-a6e3-42f6-9207-48868646e826" path="/var/lib/kubelet/pods/2067f086-a6e3-42f6-9207-48868646e826/volumes" Feb 28 11:10:21 crc kubenswrapper[4996]: I0228 11:10:21.493026 4996 generic.go:334] "Generic (PLEG): container finished" podID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" containerID="50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d" exitCode=0 Feb 28 11:10:21 crc kubenswrapper[4996]: I0228 11:10:21.493150 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-9txfd/must-gather-9fssl" event={"ID":"15c2b360-7bd2-47c0-80ab-15cca738eb8c","Type":"ContainerDied","Data":"50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d"} Feb 28 11:10:21 crc kubenswrapper[4996]: I0228 11:10:21.494192 4996 scope.go:117] "RemoveContainer" 
containerID="50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d" Feb 28 11:10:22 crc kubenswrapper[4996]: I0228 11:10:22.455372 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9txfd_must-gather-9fssl_15c2b360-7bd2-47c0-80ab-15cca738eb8c/gather/0.log" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.024234 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-9txfd/must-gather-9fssl"] Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.025143 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-9txfd/must-gather-9fssl" podUID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" containerName="copy" containerID="cri-o://ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d" gracePeriod=2 Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.036270 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-9txfd/must-gather-9fssl"] Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.466581 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9txfd_must-gather-9fssl_15c2b360-7bd2-47c0-80ab-15cca738eb8c/copy/0.log" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.467060 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.591843 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-9txfd_must-gather-9fssl_15c2b360-7bd2-47c0-80ab-15cca738eb8c/copy/0.log" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.592559 4996 generic.go:334] "Generic (PLEG): container finished" podID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" containerID="ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d" exitCode=143 Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.592663 4996 scope.go:117] "RemoveContainer" containerID="ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.592699 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-9txfd/must-gather-9fssl" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.626528 4996 scope.go:117] "RemoveContainer" containerID="50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.630956 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkbq9\" (UniqueName: \"kubernetes.io/projected/15c2b360-7bd2-47c0-80ab-15cca738eb8c-kube-api-access-tkbq9\") pod \"15c2b360-7bd2-47c0-80ab-15cca738eb8c\" (UID: \"15c2b360-7bd2-47c0-80ab-15cca738eb8c\") " Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.631114 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15c2b360-7bd2-47c0-80ab-15cca738eb8c-must-gather-output\") pod \"15c2b360-7bd2-47c0-80ab-15cca738eb8c\" (UID: \"15c2b360-7bd2-47c0-80ab-15cca738eb8c\") " Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.638228 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/15c2b360-7bd2-47c0-80ab-15cca738eb8c-kube-api-access-tkbq9" (OuterVolumeSpecName: "kube-api-access-tkbq9") pod "15c2b360-7bd2-47c0-80ab-15cca738eb8c" (UID: "15c2b360-7bd2-47c0-80ab-15cca738eb8c"). InnerVolumeSpecName "kube-api-access-tkbq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.703569 4996 scope.go:117] "RemoveContainer" containerID="ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d" Feb 28 11:10:32 crc kubenswrapper[4996]: E0228 11:10:32.704268 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d\": container with ID starting with ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d not found: ID does not exist" containerID="ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.704521 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d"} err="failed to get container status \"ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d\": rpc error: code = NotFound desc = could not find container \"ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d\": container with ID starting with ef9ea24f8b8d64ae1bbd799ce85fe9bdebc4374182d5b1060f43ab0012bd764d not found: ID does not exist" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.704719 4996 scope.go:117] "RemoveContainer" containerID="50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d" Feb 28 11:10:32 crc kubenswrapper[4996]: E0228 11:10:32.705419 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d\": 
container with ID starting with 50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d not found: ID does not exist" containerID="50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.705469 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d"} err="failed to get container status \"50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d\": rpc error: code = NotFound desc = could not find container \"50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d\": container with ID starting with 50a2c054e0485c6a31afbbcd1ba0ec6cd39728aba954bfdea8bcf29c76bc7a0d not found: ID does not exist" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.733665 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkbq9\" (UniqueName: \"kubernetes.io/projected/15c2b360-7bd2-47c0-80ab-15cca738eb8c-kube-api-access-tkbq9\") on node \"crc\" DevicePath \"\"" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.827305 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c2b360-7bd2-47c0-80ab-15cca738eb8c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "15c2b360-7bd2-47c0-80ab-15cca738eb8c" (UID: "15c2b360-7bd2-47c0-80ab-15cca738eb8c"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:10:32 crc kubenswrapper[4996]: I0228 11:10:32.836137 4996 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/15c2b360-7bd2-47c0-80ab-15cca738eb8c-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 28 11:10:33 crc kubenswrapper[4996]: I0228 11:10:33.052062 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" path="/var/lib/kubelet/pods/15c2b360-7bd2-47c0-80ab-15cca738eb8c/volumes" Feb 28 11:10:42 crc kubenswrapper[4996]: I0228 11:10:42.248713 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:10:42 crc kubenswrapper[4996]: I0228 11:10:42.249278 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:10:51 crc kubenswrapper[4996]: I0228 11:10:51.562061 4996 scope.go:117] "RemoveContainer" containerID="408f20dfd7b990c67ccee2c2dd26b710f25f8192eec4efebba66306be9e93c02" Feb 28 11:11:12 crc kubenswrapper[4996]: I0228 11:11:12.249933 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:11:12 crc kubenswrapper[4996]: I0228 11:11:12.250624 4996 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:11:30 crc kubenswrapper[4996]: E0228 11:11:30.034111 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:11:42 crc kubenswrapper[4996]: I0228 11:11:42.249081 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:11:42 crc kubenswrapper[4996]: I0228 11:11:42.249692 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:11:42 crc kubenswrapper[4996]: I0228 11:11:42.249756 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 11:11:42 crc kubenswrapper[4996]: I0228 11:11:42.250973 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 11:11:42 crc 
kubenswrapper[4996]: I0228 11:11:42.251053 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" gracePeriod=600 Feb 28 11:11:42 crc kubenswrapper[4996]: E0228 11:11:42.383307 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:11:43 crc kubenswrapper[4996]: I0228 11:11:43.239645 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" exitCode=0 Feb 28 11:11:43 crc kubenswrapper[4996]: I0228 11:11:43.239692 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa"} Feb 28 11:11:43 crc kubenswrapper[4996]: I0228 11:11:43.239767 4996 scope.go:117] "RemoveContainer" containerID="a51d8d511d6b011a2a3ec4c88be404ecf9e6fca10cd03fdce244834cf9c24eb2" Feb 28 11:11:43 crc kubenswrapper[4996]: I0228 11:11:43.240877 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:11:43 crc kubenswrapper[4996]: E0228 11:11:43.241599 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.513611 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8xlc"] Feb 28 11:11:49 crc kubenswrapper[4996]: E0228 11:11:49.514669 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f011ae66-b283-4176-a114-20d35ff4065e" containerName="oc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.514690 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="f011ae66-b283-4176-a114-20d35ff4065e" containerName="oc" Feb 28 11:11:49 crc kubenswrapper[4996]: E0228 11:11:49.514714 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" containerName="copy" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.514722 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" containerName="copy" Feb 28 11:11:49 crc kubenswrapper[4996]: E0228 11:11:49.514747 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" containerName="gather" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.514755 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" containerName="gather" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.515028 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" containerName="copy" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.515068 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c2b360-7bd2-47c0-80ab-15cca738eb8c" containerName="gather" Feb 28 
11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.515082 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="f011ae66-b283-4176-a114-20d35ff4065e" containerName="oc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.516770 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.533857 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8xlc"] Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.652378 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-catalog-content\") pod \"community-operators-m8xlc\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.652452 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c88lz\" (UniqueName: \"kubernetes.io/projected/ce901ae9-05d2-4916-bcbc-8f76b9390e90-kube-api-access-c88lz\") pod \"community-operators-m8xlc\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.652983 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-utilities\") pod \"community-operators-m8xlc\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.754517 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-utilities\") pod \"community-operators-m8xlc\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.754581 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-catalog-content\") pod \"community-operators-m8xlc\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.754641 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c88lz\" (UniqueName: \"kubernetes.io/projected/ce901ae9-05d2-4916-bcbc-8f76b9390e90-kube-api-access-c88lz\") pod \"community-operators-m8xlc\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.755177 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-catalog-content\") pod \"community-operators-m8xlc\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.755217 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-utilities\") pod \"community-operators-m8xlc\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.776029 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c88lz\" (UniqueName: 
\"kubernetes.io/projected/ce901ae9-05d2-4916-bcbc-8f76b9390e90-kube-api-access-c88lz\") pod \"community-operators-m8xlc\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:49 crc kubenswrapper[4996]: I0228 11:11:49.850400 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:50 crc kubenswrapper[4996]: I0228 11:11:50.371202 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8xlc"] Feb 28 11:11:51 crc kubenswrapper[4996]: I0228 11:11:51.310914 4996 generic.go:334] "Generic (PLEG): container finished" podID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerID="439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12" exitCode=0 Feb 28 11:11:51 crc kubenswrapper[4996]: I0228 11:11:51.311035 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8xlc" event={"ID":"ce901ae9-05d2-4916-bcbc-8f76b9390e90","Type":"ContainerDied","Data":"439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12"} Feb 28 11:11:51 crc kubenswrapper[4996]: I0228 11:11:51.311201 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8xlc" event={"ID":"ce901ae9-05d2-4916-bcbc-8f76b9390e90","Type":"ContainerStarted","Data":"7f1d719cf611a7c790b42867335a27a4a158995a159fcb0f562148ace54da10e"} Feb 28 11:11:52 crc kubenswrapper[4996]: I0228 11:11:52.324500 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8xlc" event={"ID":"ce901ae9-05d2-4916-bcbc-8f76b9390e90","Type":"ContainerStarted","Data":"106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3"} Feb 28 11:11:54 crc kubenswrapper[4996]: I0228 11:11:54.345621 4996 generic.go:334] "Generic (PLEG): container finished" podID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" 
containerID="106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3" exitCode=0 Feb 28 11:11:54 crc kubenswrapper[4996]: I0228 11:11:54.345708 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8xlc" event={"ID":"ce901ae9-05d2-4916-bcbc-8f76b9390e90","Type":"ContainerDied","Data":"106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3"} Feb 28 11:11:55 crc kubenswrapper[4996]: I0228 11:11:55.357516 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8xlc" event={"ID":"ce901ae9-05d2-4916-bcbc-8f76b9390e90","Type":"ContainerStarted","Data":"345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49"} Feb 28 11:11:55 crc kubenswrapper[4996]: I0228 11:11:55.380973 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8xlc" podStartSLOduration=2.935039151 podStartE2EDuration="6.380948018s" podCreationTimestamp="2026-02-28 11:11:49 +0000 UTC" firstStartedPulling="2026-02-28 11:11:51.313611909 +0000 UTC m=+7875.004414730" lastFinishedPulling="2026-02-28 11:11:54.759520796 +0000 UTC m=+7878.450323597" observedRunningTime="2026-02-28 11:11:55.375600968 +0000 UTC m=+7879.066403789" watchObservedRunningTime="2026-02-28 11:11:55.380948018 +0000 UTC m=+7879.071750839" Feb 28 11:11:57 crc kubenswrapper[4996]: I0228 11:11:57.039135 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:11:57 crc kubenswrapper[4996]: E0228 11:11:57.039832 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:11:59 crc kubenswrapper[4996]: I0228 11:11:59.851146 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:59 crc kubenswrapper[4996]: I0228 11:11:59.851439 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:11:59 crc kubenswrapper[4996]: I0228 11:11:59.895889 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.152749 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537952-whtlq"] Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.155707 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537952-whtlq" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.159438 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.159722 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.159904 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.165531 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537952-whtlq"] Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.270604 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt4dg\" (UniqueName: \"kubernetes.io/projected/0929b45f-bb4f-4cb4-b3de-a7e3406f7d44-kube-api-access-lt4dg\") pod 
\"auto-csr-approver-29537952-whtlq\" (UID: \"0929b45f-bb4f-4cb4-b3de-a7e3406f7d44\") " pod="openshift-infra/auto-csr-approver-29537952-whtlq" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.372308 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lt4dg\" (UniqueName: \"kubernetes.io/projected/0929b45f-bb4f-4cb4-b3de-a7e3406f7d44-kube-api-access-lt4dg\") pod \"auto-csr-approver-29537952-whtlq\" (UID: \"0929b45f-bb4f-4cb4-b3de-a7e3406f7d44\") " pod="openshift-infra/auto-csr-approver-29537952-whtlq" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.397605 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lt4dg\" (UniqueName: \"kubernetes.io/projected/0929b45f-bb4f-4cb4-b3de-a7e3406f7d44-kube-api-access-lt4dg\") pod \"auto-csr-approver-29537952-whtlq\" (UID: \"0929b45f-bb4f-4cb4-b3de-a7e3406f7d44\") " pod="openshift-infra/auto-csr-approver-29537952-whtlq" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.452409 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.496340 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8xlc"] Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.500902 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537952-whtlq" Feb 28 11:12:00 crc kubenswrapper[4996]: I0228 11:12:00.945028 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537952-whtlq"] Feb 28 11:12:01 crc kubenswrapper[4996]: I0228 11:12:01.414214 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537952-whtlq" event={"ID":"0929b45f-bb4f-4cb4-b3de-a7e3406f7d44","Type":"ContainerStarted","Data":"6bff152f8541ae99fe299de40bb825945b381f99985fdc6fc7b6826764440d86"} Feb 28 11:12:02 crc kubenswrapper[4996]: I0228 11:12:02.431394 4996 generic.go:334] "Generic (PLEG): container finished" podID="0929b45f-bb4f-4cb4-b3de-a7e3406f7d44" containerID="cb2cfe77cb549cd388df41ecc3d4aa41d01fd7f62de02c5f98f91f8009c89d61" exitCode=0 Feb 28 11:12:02 crc kubenswrapper[4996]: I0228 11:12:02.431624 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537952-whtlq" event={"ID":"0929b45f-bb4f-4cb4-b3de-a7e3406f7d44","Type":"ContainerDied","Data":"cb2cfe77cb549cd388df41ecc3d4aa41d01fd7f62de02c5f98f91f8009c89d61"} Feb 28 11:12:02 crc kubenswrapper[4996]: I0228 11:12:02.432369 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m8xlc" podUID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerName="registry-server" containerID="cri-o://345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49" gracePeriod=2 Feb 28 11:12:02 crc kubenswrapper[4996]: I0228 11:12:02.890305 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.046452 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-utilities\") pod \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.049531 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c88lz\" (UniqueName: \"kubernetes.io/projected/ce901ae9-05d2-4916-bcbc-8f76b9390e90-kube-api-access-c88lz\") pod \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.049442 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-utilities" (OuterVolumeSpecName: "utilities") pod "ce901ae9-05d2-4916-bcbc-8f76b9390e90" (UID: "ce901ae9-05d2-4916-bcbc-8f76b9390e90"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.050832 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-catalog-content\") pod \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\" (UID: \"ce901ae9-05d2-4916-bcbc-8f76b9390e90\") " Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.051871 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.055432 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce901ae9-05d2-4916-bcbc-8f76b9390e90-kube-api-access-c88lz" (OuterVolumeSpecName: "kube-api-access-c88lz") pod "ce901ae9-05d2-4916-bcbc-8f76b9390e90" (UID: "ce901ae9-05d2-4916-bcbc-8f76b9390e90"). InnerVolumeSpecName "kube-api-access-c88lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.137042 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce901ae9-05d2-4916-bcbc-8f76b9390e90" (UID: "ce901ae9-05d2-4916-bcbc-8f76b9390e90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.155212 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce901ae9-05d2-4916-bcbc-8f76b9390e90-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.155264 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c88lz\" (UniqueName: \"kubernetes.io/projected/ce901ae9-05d2-4916-bcbc-8f76b9390e90-kube-api-access-c88lz\") on node \"crc\" DevicePath \"\"" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.445772 4996 generic.go:334] "Generic (PLEG): container finished" podID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerID="345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49" exitCode=0 Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.445848 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8xlc" event={"ID":"ce901ae9-05d2-4916-bcbc-8f76b9390e90","Type":"ContainerDied","Data":"345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49"} Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.446926 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8xlc" event={"ID":"ce901ae9-05d2-4916-bcbc-8f76b9390e90","Type":"ContainerDied","Data":"7f1d719cf611a7c790b42867335a27a4a158995a159fcb0f562148ace54da10e"} Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.446976 4996 scope.go:117] "RemoveContainer" containerID="345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.445899 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8xlc" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.492649 4996 scope.go:117] "RemoveContainer" containerID="106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.494638 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8xlc"] Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.506292 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8xlc"] Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.519282 4996 scope.go:117] "RemoveContainer" containerID="439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.571939 4996 scope.go:117] "RemoveContainer" containerID="345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49" Feb 28 11:12:03 crc kubenswrapper[4996]: E0228 11:12:03.574495 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49\": container with ID starting with 345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49 not found: ID does not exist" containerID="345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.574535 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49"} err="failed to get container status \"345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49\": rpc error: code = NotFound desc = could not find container \"345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49\": container with ID starting with 345763b1025c8f855809260e167be2c8bc7bb49b3121c8251450a293f60d6c49 not 
found: ID does not exist" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.574560 4996 scope.go:117] "RemoveContainer" containerID="106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3" Feb 28 11:12:03 crc kubenswrapper[4996]: E0228 11:12:03.580399 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3\": container with ID starting with 106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3 not found: ID does not exist" containerID="106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.580461 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3"} err="failed to get container status \"106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3\": rpc error: code = NotFound desc = could not find container \"106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3\": container with ID starting with 106b2a0643e29420d3152f2ea93938177fc49e9433eab21ddfdbb1b9e7f3d7a3 not found: ID does not exist" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.580485 4996 scope.go:117] "RemoveContainer" containerID="439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12" Feb 28 11:12:03 crc kubenswrapper[4996]: E0228 11:12:03.581081 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12\": container with ID starting with 439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12 not found: ID does not exist" containerID="439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.581111 4996 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12"} err="failed to get container status \"439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12\": rpc error: code = NotFound desc = could not find container \"439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12\": container with ID starting with 439346d63f5fada76a6588c07112c254fd8d6b8696dcf31c511ab438be7fbb12 not found: ID does not exist" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.743973 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537952-whtlq" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.864539 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lt4dg\" (UniqueName: \"kubernetes.io/projected/0929b45f-bb4f-4cb4-b3de-a7e3406f7d44-kube-api-access-lt4dg\") pod \"0929b45f-bb4f-4cb4-b3de-a7e3406f7d44\" (UID: \"0929b45f-bb4f-4cb4-b3de-a7e3406f7d44\") " Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.873152 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0929b45f-bb4f-4cb4-b3de-a7e3406f7d44-kube-api-access-lt4dg" (OuterVolumeSpecName: "kube-api-access-lt4dg") pod "0929b45f-bb4f-4cb4-b3de-a7e3406f7d44" (UID: "0929b45f-bb4f-4cb4-b3de-a7e3406f7d44"). InnerVolumeSpecName "kube-api-access-lt4dg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:12:03 crc kubenswrapper[4996]: I0228 11:12:03.966893 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lt4dg\" (UniqueName: \"kubernetes.io/projected/0929b45f-bb4f-4cb4-b3de-a7e3406f7d44-kube-api-access-lt4dg\") on node \"crc\" DevicePath \"\"" Feb 28 11:12:04 crc kubenswrapper[4996]: I0228 11:12:04.458580 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537952-whtlq" event={"ID":"0929b45f-bb4f-4cb4-b3de-a7e3406f7d44","Type":"ContainerDied","Data":"6bff152f8541ae99fe299de40bb825945b381f99985fdc6fc7b6826764440d86"} Feb 28 11:12:04 crc kubenswrapper[4996]: I0228 11:12:04.458914 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bff152f8541ae99fe299de40bb825945b381f99985fdc6fc7b6826764440d86" Feb 28 11:12:04 crc kubenswrapper[4996]: I0228 11:12:04.458610 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537952-whtlq" Feb 28 11:12:04 crc kubenswrapper[4996]: I0228 11:12:04.817558 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537946-8l8sc"] Feb 28 11:12:04 crc kubenswrapper[4996]: I0228 11:12:04.826545 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537946-8l8sc"] Feb 28 11:12:05 crc kubenswrapper[4996]: I0228 11:12:05.042497 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca61316d-5a0f-4fd2-8705-be0d1c6c4617" path="/var/lib/kubelet/pods/ca61316d-5a0f-4fd2-8705-be0d1c6c4617/volumes" Feb 28 11:12:05 crc kubenswrapper[4996]: I0228 11:12:05.043202 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" path="/var/lib/kubelet/pods/ce901ae9-05d2-4916-bcbc-8f76b9390e90/volumes" Feb 28 11:12:12 crc kubenswrapper[4996]: I0228 11:12:12.032735 4996 
scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:12:12 crc kubenswrapper[4996]: E0228 11:12:12.033640 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:12:23 crc kubenswrapper[4996]: I0228 11:12:23.033233 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:12:23 crc kubenswrapper[4996]: E0228 11:12:23.034089 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.152099 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fzxkw"] Feb 28 11:12:35 crc kubenswrapper[4996]: E0228 11:12:35.153403 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerName="extract-utilities" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.153442 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerName="extract-utilities" Feb 28 11:12:35 crc kubenswrapper[4996]: E0228 11:12:35.153519 4996 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerName="registry-server" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.153530 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerName="registry-server" Feb 28 11:12:35 crc kubenswrapper[4996]: E0228 11:12:35.153573 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerName="extract-content" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.153582 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerName="extract-content" Feb 28 11:12:35 crc kubenswrapper[4996]: E0228 11:12:35.153634 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0929b45f-bb4f-4cb4-b3de-a7e3406f7d44" containerName="oc" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.153644 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="0929b45f-bb4f-4cb4-b3de-a7e3406f7d44" containerName="oc" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.154263 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce901ae9-05d2-4916-bcbc-8f76b9390e90" containerName="registry-server" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.154330 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="0929b45f-bb4f-4cb4-b3de-a7e3406f7d44" containerName="oc" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.157298 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.179561 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzxkw"] Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.356297 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-catalog-content\") pod \"certified-operators-fzxkw\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.356721 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktnrg\" (UniqueName: \"kubernetes.io/projected/deb30627-3098-41ce-ae0d-5d92153bd1e8-kube-api-access-ktnrg\") pod \"certified-operators-fzxkw\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.356841 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-utilities\") pod \"certified-operators-fzxkw\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.459083 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-catalog-content\") pod \"certified-operators-fzxkw\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.459993 4996 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ktnrg\" (UniqueName: \"kubernetes.io/projected/deb30627-3098-41ce-ae0d-5d92153bd1e8-kube-api-access-ktnrg\") pod \"certified-operators-fzxkw\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.460413 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-utilities\") pod \"certified-operators-fzxkw\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.459774 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-catalog-content\") pod \"certified-operators-fzxkw\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.460803 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-utilities\") pod \"certified-operators-fzxkw\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.497426 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktnrg\" (UniqueName: \"kubernetes.io/projected/deb30627-3098-41ce-ae0d-5d92153bd1e8-kube-api-access-ktnrg\") pod \"certified-operators-fzxkw\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:35 crc kubenswrapper[4996]: I0228 11:12:35.782654 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:36 crc kubenswrapper[4996]: I0228 11:12:36.281355 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fzxkw"] Feb 28 11:12:36 crc kubenswrapper[4996]: I0228 11:12:36.759102 4996 generic.go:334] "Generic (PLEG): container finished" podID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerID="c36927f3268c8df2a2c9efd68cf8340f6eaa03bcf30642c3e4a3c27d747493da" exitCode=0 Feb 28 11:12:36 crc kubenswrapper[4996]: I0228 11:12:36.759153 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzxkw" event={"ID":"deb30627-3098-41ce-ae0d-5d92153bd1e8","Type":"ContainerDied","Data":"c36927f3268c8df2a2c9efd68cf8340f6eaa03bcf30642c3e4a3c27d747493da"} Feb 28 11:12:36 crc kubenswrapper[4996]: I0228 11:12:36.759369 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzxkw" event={"ID":"deb30627-3098-41ce-ae0d-5d92153bd1e8","Type":"ContainerStarted","Data":"182b88f468893fecc10baed437306197fb172d95675e2afc0742068b7e49b667"} Feb 28 11:12:37 crc kubenswrapper[4996]: I0228 11:12:37.771478 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzxkw" event={"ID":"deb30627-3098-41ce-ae0d-5d92153bd1e8","Type":"ContainerStarted","Data":"6ea7c568010f27ab37b40f8a9b8fe786ce9439d776425f296bf10fadc407ba40"} Feb 28 11:12:38 crc kubenswrapper[4996]: I0228 11:12:38.032718 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:12:38 crc kubenswrapper[4996]: E0228 11:12:38.032979 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:12:39 crc kubenswrapper[4996]: E0228 11:12:39.033214 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:12:39 crc kubenswrapper[4996]: I0228 11:12:39.790855 4996 generic.go:334] "Generic (PLEG): container finished" podID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerID="6ea7c568010f27ab37b40f8a9b8fe786ce9439d776425f296bf10fadc407ba40" exitCode=0 Feb 28 11:12:39 crc kubenswrapper[4996]: I0228 11:12:39.790929 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzxkw" event={"ID":"deb30627-3098-41ce-ae0d-5d92153bd1e8","Type":"ContainerDied","Data":"6ea7c568010f27ab37b40f8a9b8fe786ce9439d776425f296bf10fadc407ba40"} Feb 28 11:12:40 crc kubenswrapper[4996]: I0228 11:12:40.801491 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzxkw" event={"ID":"deb30627-3098-41ce-ae0d-5d92153bd1e8","Type":"ContainerStarted","Data":"841a1acf6bd2a325daa1735c629ed026b0ee766f139d20a7d56bda06a1ed0d37"} Feb 28 11:12:40 crc kubenswrapper[4996]: I0228 11:12:40.831080 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fzxkw" podStartSLOduration=2.406858944 podStartE2EDuration="5.831061932s" podCreationTimestamp="2026-02-28 11:12:35 +0000 UTC" firstStartedPulling="2026-02-28 11:12:36.76114862 +0000 UTC m=+7920.451951471" lastFinishedPulling="2026-02-28 11:12:40.185351628 +0000 UTC m=+7923.876154459" observedRunningTime="2026-02-28 11:12:40.826318587 +0000 UTC m=+7924.517121408" 
watchObservedRunningTime="2026-02-28 11:12:40.831061932 +0000 UTC m=+7924.521864733" Feb 28 11:12:45 crc kubenswrapper[4996]: I0228 11:12:45.782846 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:45 crc kubenswrapper[4996]: I0228 11:12:45.783409 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:45 crc kubenswrapper[4996]: I0228 11:12:45.856444 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:45 crc kubenswrapper[4996]: I0228 11:12:45.918027 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:46 crc kubenswrapper[4996]: I0228 11:12:46.101304 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fzxkw"] Feb 28 11:12:47 crc kubenswrapper[4996]: I0228 11:12:47.867834 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fzxkw" podUID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerName="registry-server" containerID="cri-o://841a1acf6bd2a325daa1735c629ed026b0ee766f139d20a7d56bda06a1ed0d37" gracePeriod=2 Feb 28 11:12:48 crc kubenswrapper[4996]: I0228 11:12:48.881280 4996 generic.go:334] "Generic (PLEG): container finished" podID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerID="841a1acf6bd2a325daa1735c629ed026b0ee766f139d20a7d56bda06a1ed0d37" exitCode=0 Feb 28 11:12:48 crc kubenswrapper[4996]: I0228 11:12:48.881635 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzxkw" event={"ID":"deb30627-3098-41ce-ae0d-5d92153bd1e8","Type":"ContainerDied","Data":"841a1acf6bd2a325daa1735c629ed026b0ee766f139d20a7d56bda06a1ed0d37"} Feb 28 11:12:49 crc 
kubenswrapper[4996]: I0228 11:12:49.403209 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.467401 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktnrg\" (UniqueName: \"kubernetes.io/projected/deb30627-3098-41ce-ae0d-5d92153bd1e8-kube-api-access-ktnrg\") pod \"deb30627-3098-41ce-ae0d-5d92153bd1e8\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.467539 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-utilities\") pod \"deb30627-3098-41ce-ae0d-5d92153bd1e8\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.467586 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-catalog-content\") pod \"deb30627-3098-41ce-ae0d-5d92153bd1e8\" (UID: \"deb30627-3098-41ce-ae0d-5d92153bd1e8\") " Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.468949 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-utilities" (OuterVolumeSpecName: "utilities") pod "deb30627-3098-41ce-ae0d-5d92153bd1e8" (UID: "deb30627-3098-41ce-ae0d-5d92153bd1e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.473356 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb30627-3098-41ce-ae0d-5d92153bd1e8-kube-api-access-ktnrg" (OuterVolumeSpecName: "kube-api-access-ktnrg") pod "deb30627-3098-41ce-ae0d-5d92153bd1e8" (UID: "deb30627-3098-41ce-ae0d-5d92153bd1e8"). InnerVolumeSpecName "kube-api-access-ktnrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.538504 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "deb30627-3098-41ce-ae0d-5d92153bd1e8" (UID: "deb30627-3098-41ce-ae0d-5d92153bd1e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.569424 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktnrg\" (UniqueName: \"kubernetes.io/projected/deb30627-3098-41ce-ae0d-5d92153bd1e8-kube-api-access-ktnrg\") on node \"crc\" DevicePath \"\"" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.569545 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.569556 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb30627-3098-41ce-ae0d-5d92153bd1e8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.891537 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fzxkw" 
event={"ID":"deb30627-3098-41ce-ae0d-5d92153bd1e8","Type":"ContainerDied","Data":"182b88f468893fecc10baed437306197fb172d95675e2afc0742068b7e49b667"} Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.892542 4996 scope.go:117] "RemoveContainer" containerID="841a1acf6bd2a325daa1735c629ed026b0ee766f139d20a7d56bda06a1ed0d37" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.891575 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fzxkw" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.921612 4996 scope.go:117] "RemoveContainer" containerID="6ea7c568010f27ab37b40f8a9b8fe786ce9439d776425f296bf10fadc407ba40" Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.931978 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fzxkw"] Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.939507 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fzxkw"] Feb 28 11:12:49 crc kubenswrapper[4996]: I0228 11:12:49.952282 4996 scope.go:117] "RemoveContainer" containerID="c36927f3268c8df2a2c9efd68cf8340f6eaa03bcf30642c3e4a3c27d747493da" Feb 28 11:12:51 crc kubenswrapper[4996]: I0228 11:12:51.043581 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb30627-3098-41ce-ae0d-5d92153bd1e8" path="/var/lib/kubelet/pods/deb30627-3098-41ce-ae0d-5d92153bd1e8/volumes" Feb 28 11:12:51 crc kubenswrapper[4996]: I0228 11:12:51.671687 4996 scope.go:117] "RemoveContainer" containerID="0278c260e74182027bb6ca62196b0e80491c541f36d4aad1e86355731123dc8a" Feb 28 11:12:52 crc kubenswrapper[4996]: I0228 11:12:52.035989 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:12:52 crc kubenswrapper[4996]: E0228 11:12:52.036585 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:13:07 crc kubenswrapper[4996]: I0228 11:13:07.039128 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:13:07 crc kubenswrapper[4996]: E0228 11:13:07.039934 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:13:21 crc kubenswrapper[4996]: I0228 11:13:21.034613 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:13:21 crc kubenswrapper[4996]: E0228 11:13:21.036333 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:13:36 crc kubenswrapper[4996]: I0228 11:13:36.033023 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:13:36 crc kubenswrapper[4996]: E0228 11:13:36.033652 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.033840 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:13:48 crc kubenswrapper[4996]: E0228 11:13:48.034715 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.488793 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mlcgh/must-gather-h59gw"] Feb 28 11:13:48 crc kubenswrapper[4996]: E0228 11:13:48.489640 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerName="extract-utilities" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.489663 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerName="extract-utilities" Feb 28 11:13:48 crc kubenswrapper[4996]: E0228 11:13:48.489690 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerName="extract-content" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.489697 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerName="extract-content" 
Feb 28 11:13:48 crc kubenswrapper[4996]: E0228 11:13:48.489721 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerName="registry-server" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.489727 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerName="registry-server" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.489901 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb30627-3098-41ce-ae0d-5d92153bd1e8" containerName="registry-server" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.491155 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.497383 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mlcgh"/"kube-root-ca.crt" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.498096 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mlcgh"/"openshift-service-ca.crt" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.513188 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mlcgh/must-gather-h59gw"] Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.670846 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qvvf\" (UniqueName: \"kubernetes.io/projected/b4414458-598b-4b79-b4fd-2b70f867c529-kube-api-access-5qvvf\") pod \"must-gather-h59gw\" (UID: \"b4414458-598b-4b79-b4fd-2b70f867c529\") " pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.670906 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/b4414458-598b-4b79-b4fd-2b70f867c529-must-gather-output\") pod \"must-gather-h59gw\" (UID: \"b4414458-598b-4b79-b4fd-2b70f867c529\") " pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.773622 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qvvf\" (UniqueName: \"kubernetes.io/projected/b4414458-598b-4b79-b4fd-2b70f867c529-kube-api-access-5qvvf\") pod \"must-gather-h59gw\" (UID: \"b4414458-598b-4b79-b4fd-2b70f867c529\") " pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.773699 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b4414458-598b-4b79-b4fd-2b70f867c529-must-gather-output\") pod \"must-gather-h59gw\" (UID: \"b4414458-598b-4b79-b4fd-2b70f867c529\") " pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.774331 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b4414458-598b-4b79-b4fd-2b70f867c529-must-gather-output\") pod \"must-gather-h59gw\" (UID: \"b4414458-598b-4b79-b4fd-2b70f867c529\") " pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.794123 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qvvf\" (UniqueName: \"kubernetes.io/projected/b4414458-598b-4b79-b4fd-2b70f867c529-kube-api-access-5qvvf\") pod \"must-gather-h59gw\" (UID: \"b4414458-598b-4b79-b4fd-2b70f867c529\") " pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:13:48 crc kubenswrapper[4996]: I0228 11:13:48.818431 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:13:49 crc kubenswrapper[4996]: I0228 11:13:49.291998 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mlcgh/must-gather-h59gw"] Feb 28 11:13:49 crc kubenswrapper[4996]: I0228 11:13:49.468572 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/must-gather-h59gw" event={"ID":"b4414458-598b-4b79-b4fd-2b70f867c529","Type":"ContainerStarted","Data":"6db9f9fcd326d544af059f9e72d4df8292492173347b28855925c66f8ce28d91"} Feb 28 11:13:50 crc kubenswrapper[4996]: I0228 11:13:50.480935 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/must-gather-h59gw" event={"ID":"b4414458-598b-4b79-b4fd-2b70f867c529","Type":"ContainerStarted","Data":"200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d"} Feb 28 11:13:50 crc kubenswrapper[4996]: I0228 11:13:50.481503 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/must-gather-h59gw" event={"ID":"b4414458-598b-4b79-b4fd-2b70f867c529","Type":"ContainerStarted","Data":"0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b"} Feb 28 11:13:52 crc kubenswrapper[4996]: E0228 11:13:52.033261 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.550709 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mlcgh/must-gather-h59gw" podStartSLOduration=5.55068899 podStartE2EDuration="5.55068899s" podCreationTimestamp="2026-02-28 11:13:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 11:13:50.506398822 +0000 UTC 
m=+7994.197201633" watchObservedRunningTime="2026-02-28 11:13:53.55068899 +0000 UTC m=+7997.241491811" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.559502 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mlcgh/crc-debug-n4qpq"] Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.561074 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.570523 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mlcgh"/"default-dockercfg-4n92s" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.677617 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-host\") pod \"crc-debug-n4qpq\" (UID: \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\") " pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.678065 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmn4\" (UniqueName: \"kubernetes.io/projected/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-kube-api-access-vcmn4\") pod \"crc-debug-n4qpq\" (UID: \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\") " pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.779747 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmn4\" (UniqueName: \"kubernetes.io/projected/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-kube-api-access-vcmn4\") pod \"crc-debug-n4qpq\" (UID: \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\") " pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.779869 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-host\") pod \"crc-debug-n4qpq\" (UID: \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\") " pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.779976 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-host\") pod \"crc-debug-n4qpq\" (UID: \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\") " pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.798712 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmn4\" (UniqueName: \"kubernetes.io/projected/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-kube-api-access-vcmn4\") pod \"crc-debug-n4qpq\" (UID: \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\") " pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:13:53 crc kubenswrapper[4996]: I0228 11:13:53.885549 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:13:53 crc kubenswrapper[4996]: W0228 11:13:53.930435 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae5e6a3f_3b6c_440a_867f_afcdadf54cf0.slice/crio-d82f5140c5e5fb24991638afc9c968e5a1b4c494abc2a31d8b14690653cc512d WatchSource:0}: Error finding container d82f5140c5e5fb24991638afc9c968e5a1b4c494abc2a31d8b14690653cc512d: Status 404 returned error can't find the container with id d82f5140c5e5fb24991638afc9c968e5a1b4c494abc2a31d8b14690653cc512d Feb 28 11:13:54 crc kubenswrapper[4996]: I0228 11:13:54.539717 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" event={"ID":"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0","Type":"ContainerStarted","Data":"c790e6036ced609a1ebf1c65fc897079e1b08806dc868cc093e98ed9edbc7f6e"} Feb 28 11:13:54 crc kubenswrapper[4996]: I0228 11:13:54.540179 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" event={"ID":"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0","Type":"ContainerStarted","Data":"d82f5140c5e5fb24991638afc9c968e5a1b4c494abc2a31d8b14690653cc512d"} Feb 28 11:13:54 crc kubenswrapper[4996]: I0228 11:13:54.556215 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" podStartSLOduration=1.556197492 podStartE2EDuration="1.556197492s" podCreationTimestamp="2026-02-28 11:13:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 11:13:54.551498337 +0000 UTC m=+7998.242301168" watchObservedRunningTime="2026-02-28 11:13:54.556197492 +0000 UTC m=+7998.247000303" Feb 28 11:13:59 crc kubenswrapper[4996]: I0228 11:13:59.033780 4996 scope.go:117] "RemoveContainer" 
containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:13:59 crc kubenswrapper[4996]: E0228 11:13:59.034504 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.154237 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537954-2j8mh"] Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.156133 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537954-2j8mh" Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.158377 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.158621 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.158775 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.167389 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537954-2j8mh"] Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.307360 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxdmz\" (UniqueName: \"kubernetes.io/projected/edf9c214-b1a6-4fbc-91b8-c77ceb911198-kube-api-access-dxdmz\") pod \"auto-csr-approver-29537954-2j8mh\" (UID: 
\"edf9c214-b1a6-4fbc-91b8-c77ceb911198\") " pod="openshift-infra/auto-csr-approver-29537954-2j8mh" Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.409117 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxdmz\" (UniqueName: \"kubernetes.io/projected/edf9c214-b1a6-4fbc-91b8-c77ceb911198-kube-api-access-dxdmz\") pod \"auto-csr-approver-29537954-2j8mh\" (UID: \"edf9c214-b1a6-4fbc-91b8-c77ceb911198\") " pod="openshift-infra/auto-csr-approver-29537954-2j8mh" Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.433227 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxdmz\" (UniqueName: \"kubernetes.io/projected/edf9c214-b1a6-4fbc-91b8-c77ceb911198-kube-api-access-dxdmz\") pod \"auto-csr-approver-29537954-2j8mh\" (UID: \"edf9c214-b1a6-4fbc-91b8-c77ceb911198\") " pod="openshift-infra/auto-csr-approver-29537954-2j8mh" Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.479738 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537954-2j8mh" Feb 28 11:14:00 crc kubenswrapper[4996]: I0228 11:14:00.979376 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537954-2j8mh"] Feb 28 11:14:01 crc kubenswrapper[4996]: I0228 11:14:01.612403 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537954-2j8mh" event={"ID":"edf9c214-b1a6-4fbc-91b8-c77ceb911198","Type":"ContainerStarted","Data":"7ffe5e0ae667a1f046f6c66f3e954f19de15c47ef4d421a04a11194339ffff9e"} Feb 28 11:14:03 crc kubenswrapper[4996]: I0228 11:14:03.630977 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537954-2j8mh" event={"ID":"edf9c214-b1a6-4fbc-91b8-c77ceb911198","Type":"ContainerStarted","Data":"44664b5be46a2951b286597fd0c9fb5632ca1d287fd7857a12447124f58bd86c"} Feb 28 11:14:03 crc kubenswrapper[4996]: I0228 11:14:03.645419 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537954-2j8mh" podStartSLOduration=2.761308589 podStartE2EDuration="3.64539683s" podCreationTimestamp="2026-02-28 11:14:00 +0000 UTC" firstStartedPulling="2026-02-28 11:14:00.981218846 +0000 UTC m=+8004.672021657" lastFinishedPulling="2026-02-28 11:14:01.865307087 +0000 UTC m=+8005.556109898" observedRunningTime="2026-02-28 11:14:03.643460003 +0000 UTC m=+8007.334262824" watchObservedRunningTime="2026-02-28 11:14:03.64539683 +0000 UTC m=+8007.336199641" Feb 28 11:14:04 crc kubenswrapper[4996]: I0228 11:14:04.640362 4996 generic.go:334] "Generic (PLEG): container finished" podID="edf9c214-b1a6-4fbc-91b8-c77ceb911198" containerID="44664b5be46a2951b286597fd0c9fb5632ca1d287fd7857a12447124f58bd86c" exitCode=0 Feb 28 11:14:04 crc kubenswrapper[4996]: I0228 11:14:04.640458 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537954-2j8mh" 
event={"ID":"edf9c214-b1a6-4fbc-91b8-c77ceb911198","Type":"ContainerDied","Data":"44664b5be46a2951b286597fd0c9fb5632ca1d287fd7857a12447124f58bd86c"} Feb 28 11:14:06 crc kubenswrapper[4996]: I0228 11:14:06.095160 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537954-2j8mh" Feb 28 11:14:06 crc kubenswrapper[4996]: I0228 11:14:06.230355 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxdmz\" (UniqueName: \"kubernetes.io/projected/edf9c214-b1a6-4fbc-91b8-c77ceb911198-kube-api-access-dxdmz\") pod \"edf9c214-b1a6-4fbc-91b8-c77ceb911198\" (UID: \"edf9c214-b1a6-4fbc-91b8-c77ceb911198\") " Feb 28 11:14:06 crc kubenswrapper[4996]: I0228 11:14:06.237465 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf9c214-b1a6-4fbc-91b8-c77ceb911198-kube-api-access-dxdmz" (OuterVolumeSpecName: "kube-api-access-dxdmz") pod "edf9c214-b1a6-4fbc-91b8-c77ceb911198" (UID: "edf9c214-b1a6-4fbc-91b8-c77ceb911198"). InnerVolumeSpecName "kube-api-access-dxdmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:14:06 crc kubenswrapper[4996]: I0228 11:14:06.333360 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxdmz\" (UniqueName: \"kubernetes.io/projected/edf9c214-b1a6-4fbc-91b8-c77ceb911198-kube-api-access-dxdmz\") on node \"crc\" DevicePath \"\"" Feb 28 11:14:06 crc kubenswrapper[4996]: I0228 11:14:06.658318 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537954-2j8mh" event={"ID":"edf9c214-b1a6-4fbc-91b8-c77ceb911198","Type":"ContainerDied","Data":"7ffe5e0ae667a1f046f6c66f3e954f19de15c47ef4d421a04a11194339ffff9e"} Feb 28 11:14:06 crc kubenswrapper[4996]: I0228 11:14:06.658359 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ffe5e0ae667a1f046f6c66f3e954f19de15c47ef4d421a04a11194339ffff9e" Feb 28 11:14:06 crc kubenswrapper[4996]: I0228 11:14:06.658361 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537954-2j8mh" Feb 28 11:14:06 crc kubenswrapper[4996]: I0228 11:14:06.726463 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537948-b8qbz"] Feb 28 11:14:06 crc kubenswrapper[4996]: I0228 11:14:06.737153 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537948-b8qbz"] Feb 28 11:14:07 crc kubenswrapper[4996]: I0228 11:14:07.046047 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78877f16-1062-4eaf-8166-9aafef3b38b5" path="/var/lib/kubelet/pods/78877f16-1062-4eaf-8166-9aafef3b38b5/volumes" Feb 28 11:14:13 crc kubenswrapper[4996]: I0228 11:14:13.032946 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:14:13 crc kubenswrapper[4996]: E0228 11:14:13.033554 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:14:27 crc kubenswrapper[4996]: I0228 11:14:27.040895 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:14:27 crc kubenswrapper[4996]: E0228 11:14:27.041793 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:14:35 crc kubenswrapper[4996]: I0228 11:14:35.912150 4996 generic.go:334] "Generic (PLEG): container finished" podID="ae5e6a3f-3b6c-440a-867f-afcdadf54cf0" containerID="c790e6036ced609a1ebf1c65fc897079e1b08806dc868cc093e98ed9edbc7f6e" exitCode=0 Feb 28 11:14:35 crc kubenswrapper[4996]: I0228 11:14:35.912671 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" event={"ID":"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0","Type":"ContainerDied","Data":"c790e6036ced609a1ebf1c65fc897079e1b08806dc868cc093e98ed9edbc7f6e"} Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.019723 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.044648 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-host\") pod \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\" (UID: \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\") " Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.044719 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcmn4\" (UniqueName: \"kubernetes.io/projected/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-kube-api-access-vcmn4\") pod \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\" (UID: \"ae5e6a3f-3b6c-440a-867f-afcdadf54cf0\") " Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.044811 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-host" (OuterVolumeSpecName: "host") pod "ae5e6a3f-3b6c-440a-867f-afcdadf54cf0" (UID: "ae5e6a3f-3b6c-440a-867f-afcdadf54cf0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.048989 4996 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-host\") on node \"crc\" DevicePath \"\"" Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.056075 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-kube-api-access-vcmn4" (OuterVolumeSpecName: "kube-api-access-vcmn4") pod "ae5e6a3f-3b6c-440a-867f-afcdadf54cf0" (UID: "ae5e6a3f-3b6c-440a-867f-afcdadf54cf0"). InnerVolumeSpecName "kube-api-access-vcmn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.069786 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mlcgh/crc-debug-n4qpq"] Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.079032 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mlcgh/crc-debug-n4qpq"] Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.150641 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcmn4\" (UniqueName: \"kubernetes.io/projected/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0-kube-api-access-vcmn4\") on node \"crc\" DevicePath \"\"" Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.930084 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d82f5140c5e5fb24991638afc9c968e5a1b4c494abc2a31d8b14690653cc512d" Feb 28 11:14:37 crc kubenswrapper[4996]: I0228 11:14:37.930159 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-n4qpq" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.273916 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mlcgh/crc-debug-mdjs6"] Feb 28 11:14:38 crc kubenswrapper[4996]: E0228 11:14:38.274447 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf9c214-b1a6-4fbc-91b8-c77ceb911198" containerName="oc" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.274463 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf9c214-b1a6-4fbc-91b8-c77ceb911198" containerName="oc" Feb 28 11:14:38 crc kubenswrapper[4996]: E0228 11:14:38.274483 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae5e6a3f-3b6c-440a-867f-afcdadf54cf0" containerName="container-00" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.274491 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae5e6a3f-3b6c-440a-867f-afcdadf54cf0" containerName="container-00" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.274750 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae5e6a3f-3b6c-440a-867f-afcdadf54cf0" containerName="container-00" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.275099 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf9c214-b1a6-4fbc-91b8-c77ceb911198" containerName="oc" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.275901 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.278843 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mlcgh"/"default-dockercfg-4n92s" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.375679 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-host\") pod \"crc-debug-mdjs6\" (UID: \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\") " pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.375894 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjc88\" (UniqueName: \"kubernetes.io/projected/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-kube-api-access-sjc88\") pod \"crc-debug-mdjs6\" (UID: \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\") " pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.476962 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjc88\" (UniqueName: \"kubernetes.io/projected/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-kube-api-access-sjc88\") pod \"crc-debug-mdjs6\" (UID: \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\") " pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.477060 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-host\") pod \"crc-debug-mdjs6\" (UID: \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\") " pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.477199 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-host\") pod \"crc-debug-mdjs6\" (UID: \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\") " pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.496378 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjc88\" (UniqueName: \"kubernetes.io/projected/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-kube-api-access-sjc88\") pod \"crc-debug-mdjs6\" (UID: \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\") " pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.593836 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:38 crc kubenswrapper[4996]: W0228 11:14:38.644323 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa50c8ae_7107_4c4a_b4af_9e21d9a601e6.slice/crio-cf9b4954805920bc64f8d406d6c2993c9c41021cc9d6f7205042e4eae58e2a4d WatchSource:0}: Error finding container cf9b4954805920bc64f8d406d6c2993c9c41021cc9d6f7205042e4eae58e2a4d: Status 404 returned error can't find the container with id cf9b4954805920bc64f8d406d6c2993c9c41021cc9d6f7205042e4eae58e2a4d Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.947766 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" event={"ID":"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6","Type":"ContainerStarted","Data":"974e3793e5070c6952a1f07899162fcaf1f06cc3918667236d94bb3cc21ea53e"} Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 11:14:38.948078 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" event={"ID":"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6","Type":"ContainerStarted","Data":"cf9b4954805920bc64f8d406d6c2993c9c41021cc9d6f7205042e4eae58e2a4d"} Feb 28 11:14:38 crc kubenswrapper[4996]: I0228 
11:14:38.972046 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" podStartSLOduration=0.972024591 podStartE2EDuration="972.024591ms" podCreationTimestamp="2026-02-28 11:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 11:14:38.960968391 +0000 UTC m=+8042.651771192" watchObservedRunningTime="2026-02-28 11:14:38.972024591 +0000 UTC m=+8042.662827412" Feb 28 11:14:39 crc kubenswrapper[4996]: I0228 11:14:39.068581 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5e6a3f-3b6c-440a-867f-afcdadf54cf0" path="/var/lib/kubelet/pods/ae5e6a3f-3b6c-440a-867f-afcdadf54cf0/volumes" Feb 28 11:14:39 crc kubenswrapper[4996]: I0228 11:14:39.964757 4996 generic.go:334] "Generic (PLEG): container finished" podID="fa50c8ae-7107-4c4a-b4af-9e21d9a601e6" containerID="974e3793e5070c6952a1f07899162fcaf1f06cc3918667236d94bb3cc21ea53e" exitCode=0 Feb 28 11:14:39 crc kubenswrapper[4996]: I0228 11:14:39.965100 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" event={"ID":"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6","Type":"ContainerDied","Data":"974e3793e5070c6952a1f07899162fcaf1f06cc3918667236d94bb3cc21ea53e"} Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.034873 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:14:41 crc kubenswrapper[4996]: E0228 11:14:41.035609 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.084535 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.222125 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjc88\" (UniqueName: \"kubernetes.io/projected/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-kube-api-access-sjc88\") pod \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\" (UID: \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\") " Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.222411 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-host\") pod \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\" (UID: \"fa50c8ae-7107-4c4a-b4af-9e21d9a601e6\") " Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.224109 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-host" (OuterVolumeSpecName: "host") pod "fa50c8ae-7107-4c4a-b4af-9e21d9a601e6" (UID: "fa50c8ae-7107-4c4a-b4af-9e21d9a601e6"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.238519 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-kube-api-access-sjc88" (OuterVolumeSpecName: "kube-api-access-sjc88") pod "fa50c8ae-7107-4c4a-b4af-9e21d9a601e6" (UID: "fa50c8ae-7107-4c4a-b4af-9e21d9a601e6"). InnerVolumeSpecName "kube-api-access-sjc88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.324374 4996 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-host\") on node \"crc\" DevicePath \"\"" Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.324413 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjc88\" (UniqueName: \"kubernetes.io/projected/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6-kube-api-access-sjc88\") on node \"crc\" DevicePath \"\"" Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.650459 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mlcgh/crc-debug-mdjs6"] Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.660940 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mlcgh/crc-debug-mdjs6"] Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.982411 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf9b4954805920bc64f8d406d6c2993c9c41021cc9d6f7205042e4eae58e2a4d" Feb 28 11:14:41 crc kubenswrapper[4996]: I0228 11:14:41.982522 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-mdjs6" Feb 28 11:14:42 crc kubenswrapper[4996]: I0228 11:14:42.825137 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mlcgh/crc-debug-4lsdv"] Feb 28 11:14:42 crc kubenswrapper[4996]: E0228 11:14:42.825633 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa50c8ae-7107-4c4a-b4af-9e21d9a601e6" containerName="container-00" Feb 28 11:14:42 crc kubenswrapper[4996]: I0228 11:14:42.825649 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa50c8ae-7107-4c4a-b4af-9e21d9a601e6" containerName="container-00" Feb 28 11:14:42 crc kubenswrapper[4996]: I0228 11:14:42.825871 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa50c8ae-7107-4c4a-b4af-9e21d9a601e6" containerName="container-00" Feb 28 11:14:42 crc kubenswrapper[4996]: I0228 11:14:42.826792 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:42 crc kubenswrapper[4996]: I0228 11:14:42.828769 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mlcgh"/"default-dockercfg-4n92s" Feb 28 11:14:42 crc kubenswrapper[4996]: I0228 11:14:42.956187 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ae5b753-060c-4185-b8c8-6760a38118ab-host\") pod \"crc-debug-4lsdv\" (UID: \"0ae5b753-060c-4185-b8c8-6760a38118ab\") " pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:42 crc kubenswrapper[4996]: I0228 11:14:42.956604 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt5rw\" (UniqueName: \"kubernetes.io/projected/0ae5b753-060c-4185-b8c8-6760a38118ab-kube-api-access-gt5rw\") pod \"crc-debug-4lsdv\" (UID: \"0ae5b753-060c-4185-b8c8-6760a38118ab\") " 
pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:43 crc kubenswrapper[4996]: I0228 11:14:43.045468 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa50c8ae-7107-4c4a-b4af-9e21d9a601e6" path="/var/lib/kubelet/pods/fa50c8ae-7107-4c4a-b4af-9e21d9a601e6/volumes" Feb 28 11:14:43 crc kubenswrapper[4996]: I0228 11:14:43.058163 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ae5b753-060c-4185-b8c8-6760a38118ab-host\") pod \"crc-debug-4lsdv\" (UID: \"0ae5b753-060c-4185-b8c8-6760a38118ab\") " pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:43 crc kubenswrapper[4996]: I0228 11:14:43.058425 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ae5b753-060c-4185-b8c8-6760a38118ab-host\") pod \"crc-debug-4lsdv\" (UID: \"0ae5b753-060c-4185-b8c8-6760a38118ab\") " pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:43 crc kubenswrapper[4996]: I0228 11:14:43.058744 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt5rw\" (UniqueName: \"kubernetes.io/projected/0ae5b753-060c-4185-b8c8-6760a38118ab-kube-api-access-gt5rw\") pod \"crc-debug-4lsdv\" (UID: \"0ae5b753-060c-4185-b8c8-6760a38118ab\") " pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:43 crc kubenswrapper[4996]: I0228 11:14:43.092137 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt5rw\" (UniqueName: \"kubernetes.io/projected/0ae5b753-060c-4185-b8c8-6760a38118ab-kube-api-access-gt5rw\") pod \"crc-debug-4lsdv\" (UID: \"0ae5b753-060c-4185-b8c8-6760a38118ab\") " pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:43 crc kubenswrapper[4996]: I0228 11:14:43.145853 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:44 crc kubenswrapper[4996]: I0228 11:14:44.012552 4996 generic.go:334] "Generic (PLEG): container finished" podID="0ae5b753-060c-4185-b8c8-6760a38118ab" containerID="99ca4a07cc8cbf954e2ba546942012627d3ea6ad3d2eb63e9b575a469ab2cadb" exitCode=0 Feb 28 11:14:44 crc kubenswrapper[4996]: I0228 11:14:44.012629 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" event={"ID":"0ae5b753-060c-4185-b8c8-6760a38118ab","Type":"ContainerDied","Data":"99ca4a07cc8cbf954e2ba546942012627d3ea6ad3d2eb63e9b575a469ab2cadb"} Feb 28 11:14:44 crc kubenswrapper[4996]: I0228 11:14:44.012975 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" event={"ID":"0ae5b753-060c-4185-b8c8-6760a38118ab","Type":"ContainerStarted","Data":"bef6a0dc0be6b2d0d5d71accc89bc57b66d11ab9556f5215800aae1d8f1433e4"} Feb 28 11:14:44 crc kubenswrapper[4996]: I0228 11:14:44.055719 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mlcgh/crc-debug-4lsdv"] Feb 28 11:14:44 crc kubenswrapper[4996]: I0228 11:14:44.063959 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mlcgh/crc-debug-4lsdv"] Feb 28 11:14:45 crc kubenswrapper[4996]: I0228 11:14:45.129286 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:45 crc kubenswrapper[4996]: I0228 11:14:45.299856 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt5rw\" (UniqueName: \"kubernetes.io/projected/0ae5b753-060c-4185-b8c8-6760a38118ab-kube-api-access-gt5rw\") pod \"0ae5b753-060c-4185-b8c8-6760a38118ab\" (UID: \"0ae5b753-060c-4185-b8c8-6760a38118ab\") " Feb 28 11:14:45 crc kubenswrapper[4996]: I0228 11:14:45.300059 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ae5b753-060c-4185-b8c8-6760a38118ab-host\") pod \"0ae5b753-060c-4185-b8c8-6760a38118ab\" (UID: \"0ae5b753-060c-4185-b8c8-6760a38118ab\") " Feb 28 11:14:45 crc kubenswrapper[4996]: I0228 11:14:45.300587 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ae5b753-060c-4185-b8c8-6760a38118ab-host" (OuterVolumeSpecName: "host") pod "0ae5b753-060c-4185-b8c8-6760a38118ab" (UID: "0ae5b753-060c-4185-b8c8-6760a38118ab"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 11:14:45 crc kubenswrapper[4996]: I0228 11:14:45.308945 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae5b753-060c-4185-b8c8-6760a38118ab-kube-api-access-gt5rw" (OuterVolumeSpecName: "kube-api-access-gt5rw") pod "0ae5b753-060c-4185-b8c8-6760a38118ab" (UID: "0ae5b753-060c-4185-b8c8-6760a38118ab"). InnerVolumeSpecName "kube-api-access-gt5rw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:14:45 crc kubenswrapper[4996]: I0228 11:14:45.402592 4996 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0ae5b753-060c-4185-b8c8-6760a38118ab-host\") on node \"crc\" DevicePath \"\"" Feb 28 11:14:45 crc kubenswrapper[4996]: I0228 11:14:45.402638 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt5rw\" (UniqueName: \"kubernetes.io/projected/0ae5b753-060c-4185-b8c8-6760a38118ab-kube-api-access-gt5rw\") on node \"crc\" DevicePath \"\"" Feb 28 11:14:46 crc kubenswrapper[4996]: I0228 11:14:46.030541 4996 scope.go:117] "RemoveContainer" containerID="99ca4a07cc8cbf954e2ba546942012627d3ea6ad3d2eb63e9b575a469ab2cadb" Feb 28 11:14:46 crc kubenswrapper[4996]: I0228 11:14:46.030597 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mlcgh/crc-debug-4lsdv" Feb 28 11:14:47 crc kubenswrapper[4996]: I0228 11:14:47.044799 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae5b753-060c-4185-b8c8-6760a38118ab" path="/var/lib/kubelet/pods/0ae5b753-060c-4185-b8c8-6760a38118ab/volumes" Feb 28 11:14:51 crc kubenswrapper[4996]: I0228 11:14:51.777674 4996 scope.go:117] "RemoveContainer" containerID="8aa9aee2e281bfeafeeb191772e8ea57d526454e797e82c491b2b05dc99f2b2f" Feb 28 11:14:56 crc kubenswrapper[4996]: I0228 11:14:56.032970 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:14:56 crc kubenswrapper[4996]: E0228 11:14:56.033521 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:14:57 crc kubenswrapper[4996]: E0228 11:14:57.038736 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.147792 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx"] Feb 28 11:15:00 crc kubenswrapper[4996]: E0228 11:15:00.148582 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae5b753-060c-4185-b8c8-6760a38118ab" containerName="container-00" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.148599 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae5b753-060c-4185-b8c8-6760a38118ab" containerName="container-00" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.148846 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae5b753-060c-4185-b8c8-6760a38118ab" containerName="container-00" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.149514 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.160647 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.161208 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx"] Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.163125 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.302355 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6pns\" (UniqueName: \"kubernetes.io/projected/b01f0dc1-85e3-4ba6-9449-87f3634b8988-kube-api-access-p6pns\") pod \"collect-profiles-29537955-jrwlx\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.302752 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b01f0dc1-85e3-4ba6-9449-87f3634b8988-secret-volume\") pod \"collect-profiles-29537955-jrwlx\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.302798 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b01f0dc1-85e3-4ba6-9449-87f3634b8988-config-volume\") pod \"collect-profiles-29537955-jrwlx\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.406458 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b01f0dc1-85e3-4ba6-9449-87f3634b8988-secret-volume\") pod \"collect-profiles-29537955-jrwlx\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.406638 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b01f0dc1-85e3-4ba6-9449-87f3634b8988-config-volume\") pod \"collect-profiles-29537955-jrwlx\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.406703 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6pns\" (UniqueName: \"kubernetes.io/projected/b01f0dc1-85e3-4ba6-9449-87f3634b8988-kube-api-access-p6pns\") pod \"collect-profiles-29537955-jrwlx\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.408393 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b01f0dc1-85e3-4ba6-9449-87f3634b8988-config-volume\") pod \"collect-profiles-29537955-jrwlx\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.414738 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b01f0dc1-85e3-4ba6-9449-87f3634b8988-secret-volume\") pod \"collect-profiles-29537955-jrwlx\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.424626 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6pns\" (UniqueName: \"kubernetes.io/projected/b01f0dc1-85e3-4ba6-9449-87f3634b8988-kube-api-access-p6pns\") pod \"collect-profiles-29537955-jrwlx\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.473343 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:00 crc kubenswrapper[4996]: I0228 11:15:00.933160 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx"] Feb 28 11:15:01 crc kubenswrapper[4996]: I0228 11:15:01.160987 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" event={"ID":"b01f0dc1-85e3-4ba6-9449-87f3634b8988","Type":"ContainerStarted","Data":"e2ee54aac65a8c76e404fae445753be93527721ff4b7e65dc16390f51956eb81"} Feb 28 11:15:01 crc kubenswrapper[4996]: I0228 11:15:01.161066 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" event={"ID":"b01f0dc1-85e3-4ba6-9449-87f3634b8988","Type":"ContainerStarted","Data":"b1c4e044db980ff8a576cb9a0097a038c3247284529064600d1eb45365ea0f1f"} Feb 28 11:15:01 crc kubenswrapper[4996]: I0228 11:15:01.183436 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" 
podStartSLOduration=1.183418432 podStartE2EDuration="1.183418432s" podCreationTimestamp="2026-02-28 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 11:15:01.175373495 +0000 UTC m=+8064.866176306" watchObservedRunningTime="2026-02-28 11:15:01.183418432 +0000 UTC m=+8064.874221243" Feb 28 11:15:02 crc kubenswrapper[4996]: I0228 11:15:02.171733 4996 generic.go:334] "Generic (PLEG): container finished" podID="b01f0dc1-85e3-4ba6-9449-87f3634b8988" containerID="e2ee54aac65a8c76e404fae445753be93527721ff4b7e65dc16390f51956eb81" exitCode=0 Feb 28 11:15:02 crc kubenswrapper[4996]: I0228 11:15:02.171840 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" event={"ID":"b01f0dc1-85e3-4ba6-9449-87f3634b8988","Type":"ContainerDied","Data":"e2ee54aac65a8c76e404fae445753be93527721ff4b7e65dc16390f51956eb81"} Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.516699 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.610035 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6pns\" (UniqueName: \"kubernetes.io/projected/b01f0dc1-85e3-4ba6-9449-87f3634b8988-kube-api-access-p6pns\") pod \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.610238 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b01f0dc1-85e3-4ba6-9449-87f3634b8988-config-volume\") pod \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.610290 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b01f0dc1-85e3-4ba6-9449-87f3634b8988-secret-volume\") pod \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\" (UID: \"b01f0dc1-85e3-4ba6-9449-87f3634b8988\") " Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.610847 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b01f0dc1-85e3-4ba6-9449-87f3634b8988-config-volume" (OuterVolumeSpecName: "config-volume") pod "b01f0dc1-85e3-4ba6-9449-87f3634b8988" (UID: "b01f0dc1-85e3-4ba6-9449-87f3634b8988"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.615948 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b01f0dc1-85e3-4ba6-9449-87f3634b8988-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b01f0dc1-85e3-4ba6-9449-87f3634b8988" (UID: "b01f0dc1-85e3-4ba6-9449-87f3634b8988"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.616816 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b01f0dc1-85e3-4ba6-9449-87f3634b8988-kube-api-access-p6pns" (OuterVolumeSpecName: "kube-api-access-p6pns") pod "b01f0dc1-85e3-4ba6-9449-87f3634b8988" (UID: "b01f0dc1-85e3-4ba6-9449-87f3634b8988"). InnerVolumeSpecName "kube-api-access-p6pns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.713580 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6pns\" (UniqueName: \"kubernetes.io/projected/b01f0dc1-85e3-4ba6-9449-87f3634b8988-kube-api-access-p6pns\") on node \"crc\" DevicePath \"\"" Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.713625 4996 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b01f0dc1-85e3-4ba6-9449-87f3634b8988-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 11:15:03 crc kubenswrapper[4996]: I0228 11:15:03.713637 4996 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b01f0dc1-85e3-4ba6-9449-87f3634b8988-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 11:15:04 crc kubenswrapper[4996]: I0228 11:15:04.190217 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" event={"ID":"b01f0dc1-85e3-4ba6-9449-87f3634b8988","Type":"ContainerDied","Data":"b1c4e044db980ff8a576cb9a0097a038c3247284529064600d1eb45365ea0f1f"} Feb 28 11:15:04 crc kubenswrapper[4996]: I0228 11:15:04.190264 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1c4e044db980ff8a576cb9a0097a038c3247284529064600d1eb45365ea0f1f" Feb 28 11:15:04 crc kubenswrapper[4996]: I0228 11:15:04.190327 4996 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537955-jrwlx" Feb 28 11:15:04 crc kubenswrapper[4996]: I0228 11:15:04.262517 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq"] Feb 28 11:15:04 crc kubenswrapper[4996]: I0228 11:15:04.276335 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537910-xmqfq"] Feb 28 11:15:05 crc kubenswrapper[4996]: I0228 11:15:05.057433 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19d6c20-abe2-4b10-a110-0a535cd35297" path="/var/lib/kubelet/pods/a19d6c20-abe2-4b10-a110-0a535cd35297/volumes" Feb 28 11:15:09 crc kubenswrapper[4996]: I0228 11:15:09.033949 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:15:09 crc kubenswrapper[4996]: E0228 11:15:09.034719 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:15:22 crc kubenswrapper[4996]: I0228 11:15:22.034166 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:15:22 crc kubenswrapper[4996]: E0228 11:15:22.035179 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.572475 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5nd95"] Feb 28 11:15:23 crc kubenswrapper[4996]: E0228 11:15:23.573323 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b01f0dc1-85e3-4ba6-9449-87f3634b8988" containerName="collect-profiles" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.573344 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b01f0dc1-85e3-4ba6-9449-87f3634b8988" containerName="collect-profiles" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.573594 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b01f0dc1-85e3-4ba6-9449-87f3634b8988" containerName="collect-profiles" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.576138 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.582344 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nd95"] Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.619115 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq4dl\" (UniqueName: \"kubernetes.io/projected/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-kube-api-access-jq4dl\") pod \"redhat-marketplace-5nd95\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") " pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.619476 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-catalog-content\") pod \"redhat-marketplace-5nd95\" (UID: 
\"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") " pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.619509 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-utilities\") pod \"redhat-marketplace-5nd95\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") " pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.720672 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-catalog-content\") pod \"redhat-marketplace-5nd95\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") " pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.720730 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-utilities\") pod \"redhat-marketplace-5nd95\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") " pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.720807 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq4dl\" (UniqueName: \"kubernetes.io/projected/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-kube-api-access-jq4dl\") pod \"redhat-marketplace-5nd95\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") " pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.721236 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-catalog-content\") pod \"redhat-marketplace-5nd95\" (UID: 
\"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") " pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.721299 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-utilities\") pod \"redhat-marketplace-5nd95\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") " pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.744887 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq4dl\" (UniqueName: \"kubernetes.io/projected/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-kube-api-access-jq4dl\") pod \"redhat-marketplace-5nd95\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") " pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.765489 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k58fm"] Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.767903 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.778341 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k58fm"] Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.912375 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.925025 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k28tb\" (UniqueName: \"kubernetes.io/projected/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-kube-api-access-k28tb\") pod \"redhat-operators-k58fm\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") " pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.925084 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-utilities\") pod \"redhat-operators-k58fm\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") " pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:23 crc kubenswrapper[4996]: I0228 11:15:23.925172 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-catalog-content\") pod \"redhat-operators-k58fm\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") " pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:24 crc kubenswrapper[4996]: I0228 11:15:24.026982 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k28tb\" (UniqueName: \"kubernetes.io/projected/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-kube-api-access-k28tb\") pod \"redhat-operators-k58fm\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") " pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:24 crc kubenswrapper[4996]: I0228 11:15:24.027362 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-utilities\") pod \"redhat-operators-k58fm\" (UID: 
\"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") " pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:24 crc kubenswrapper[4996]: I0228 11:15:24.027464 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-catalog-content\") pod \"redhat-operators-k58fm\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") " pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:24 crc kubenswrapper[4996]: I0228 11:15:24.028091 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-utilities\") pod \"redhat-operators-k58fm\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") " pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:24 crc kubenswrapper[4996]: I0228 11:15:24.031996 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-catalog-content\") pod \"redhat-operators-k58fm\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") " pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:24 crc kubenswrapper[4996]: I0228 11:15:24.067900 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k28tb\" (UniqueName: \"kubernetes.io/projected/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-kube-api-access-k28tb\") pod \"redhat-operators-k58fm\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") " pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:24 crc kubenswrapper[4996]: I0228 11:15:24.116587 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k58fm" Feb 28 11:15:24 crc kubenswrapper[4996]: I0228 11:15:24.411651 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nd95"] Feb 28 11:15:24 crc kubenswrapper[4996]: I0228 11:15:24.623630 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k58fm"] Feb 28 11:15:24 crc kubenswrapper[4996]: W0228 11:15:24.651786 4996 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb675c69_d3ce_4b4c_a370_1fa32ec14ad0.slice/crio-94a9bba05ac978a1b6ed66d884117cb98e8871a8ad0332905324a8f6e363f707 WatchSource:0}: Error finding container 94a9bba05ac978a1b6ed66d884117cb98e8871a8ad0332905324a8f6e363f707: Status 404 returned error can't find the container with id 94a9bba05ac978a1b6ed66d884117cb98e8871a8ad0332905324a8f6e363f707 Feb 28 11:15:25 crc kubenswrapper[4996]: I0228 11:15:25.383875 4996 generic.go:334] "Generic (PLEG): container finished" podID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerID="fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598" exitCode=0 Feb 28 11:15:25 crc kubenswrapper[4996]: I0228 11:15:25.383982 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nd95" event={"ID":"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7","Type":"ContainerDied","Data":"fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598"} Feb 28 11:15:25 crc kubenswrapper[4996]: I0228 11:15:25.384189 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nd95" event={"ID":"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7","Type":"ContainerStarted","Data":"42481a4cd9bd978718fdfd0eacad16acd5631b52c9856177507c6ed3d5d270af"} Feb 28 11:15:25 crc kubenswrapper[4996]: I0228 11:15:25.386091 4996 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 28 11:15:25 crc kubenswrapper[4996]: I0228 11:15:25.386411 4996 generic.go:334] "Generic (PLEG): container finished" podID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerID="4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d" exitCode=0 Feb 28 11:15:25 crc kubenswrapper[4996]: I0228 11:15:25.386450 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k58fm" event={"ID":"db675c69-d3ce-4b4c-a370-1fa32ec14ad0","Type":"ContainerDied","Data":"4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d"} Feb 28 11:15:25 crc kubenswrapper[4996]: I0228 11:15:25.386479 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k58fm" event={"ID":"db675c69-d3ce-4b4c-a370-1fa32ec14ad0","Type":"ContainerStarted","Data":"94a9bba05ac978a1b6ed66d884117cb98e8871a8ad0332905324a8f6e363f707"} Feb 28 11:15:26 crc kubenswrapper[4996]: I0228 11:15:26.423702 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k58fm" event={"ID":"db675c69-d3ce-4b4c-a370-1fa32ec14ad0","Type":"ContainerStarted","Data":"0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b"} Feb 28 11:15:26 crc kubenswrapper[4996]: I0228 11:15:26.427417 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nd95" event={"ID":"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7","Type":"ContainerStarted","Data":"4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e"} Feb 28 11:15:27 crc kubenswrapper[4996]: I0228 11:15:27.451489 4996 generic.go:334] "Generic (PLEG): container finished" podID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerID="4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e" exitCode=0 Feb 28 11:15:27 crc kubenswrapper[4996]: I0228 11:15:27.451619 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5nd95" event={"ID":"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7","Type":"ContainerDied","Data":"4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e"} Feb 28 11:15:28 crc kubenswrapper[4996]: I0228 11:15:28.465897 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nd95" event={"ID":"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7","Type":"ContainerStarted","Data":"c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe"} Feb 28 11:15:28 crc kubenswrapper[4996]: I0228 11:15:28.495992 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5nd95" podStartSLOduration=3.025660438 podStartE2EDuration="5.495968382s" podCreationTimestamp="2026-02-28 11:15:23 +0000 UTC" firstStartedPulling="2026-02-28 11:15:25.385872208 +0000 UTC m=+8089.076675019" lastFinishedPulling="2026-02-28 11:15:27.856180122 +0000 UTC m=+8091.546982963" observedRunningTime="2026-02-28 11:15:28.484127153 +0000 UTC m=+8092.174929974" watchObservedRunningTime="2026-02-28 11:15:28.495968382 +0000 UTC m=+8092.186771183" Feb 28 11:15:33 crc kubenswrapper[4996]: I0228 11:15:33.526430 4996 generic.go:334] "Generic (PLEG): container finished" podID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerID="0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b" exitCode=0 Feb 28 11:15:33 crc kubenswrapper[4996]: I0228 11:15:33.526723 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k58fm" event={"ID":"db675c69-d3ce-4b4c-a370-1fa32ec14ad0","Type":"ContainerDied","Data":"0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b"} Feb 28 11:15:33 crc kubenswrapper[4996]: I0228 11:15:33.912828 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5nd95" Feb 28 11:15:33 crc kubenswrapper[4996]: I0228 11:15:33.913445 4996 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5nd95"
Feb 28 11:15:33 crc kubenswrapper[4996]: I0228 11:15:33.994680 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5nd95"
Feb 28 11:15:34 crc kubenswrapper[4996]: I0228 11:15:34.537844 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k58fm" event={"ID":"db675c69-d3ce-4b4c-a370-1fa32ec14ad0","Type":"ContainerStarted","Data":"e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28"}
Feb 28 11:15:34 crc kubenswrapper[4996]: I0228 11:15:34.565139 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k58fm" podStartSLOduration=3.062123108 podStartE2EDuration="11.565117504s" podCreationTimestamp="2026-02-28 11:15:23 +0000 UTC" firstStartedPulling="2026-02-28 11:15:25.387759065 +0000 UTC m=+8089.078561866" lastFinishedPulling="2026-02-28 11:15:33.890753441 +0000 UTC m=+8097.581556262" observedRunningTime="2026-02-28 11:15:34.552797684 +0000 UTC m=+8098.243600505" watchObservedRunningTime="2026-02-28 11:15:34.565117504 +0000 UTC m=+8098.255920315"
Feb 28 11:15:34 crc kubenswrapper[4996]: I0228 11:15:34.593086 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5nd95"
Feb 28 11:15:35 crc kubenswrapper[4996]: I0228 11:15:35.763805 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nd95"]
Feb 28 11:15:36 crc kubenswrapper[4996]: I0228 11:15:36.033662 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa"
Feb 28 11:15:36 crc kubenswrapper[4996]: E0228 11:15:36.034167 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63"
Feb 28 11:15:36 crc kubenswrapper[4996]: I0228 11:15:36.554542 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5nd95" podUID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerName="registry-server" containerID="cri-o://c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe" gracePeriod=2
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.104679 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nd95"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.197537 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-catalog-content\") pod \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") "
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.197630 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-utilities\") pod \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") "
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.197769 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq4dl\" (UniqueName: \"kubernetes.io/projected/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-kube-api-access-jq4dl\") pod \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\" (UID: \"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7\") "
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.199603 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-utilities" (OuterVolumeSpecName: "utilities") pod "b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" (UID: "b47a0f60-e8c4-44d3-b13b-dd6554aa06b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.205082 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-kube-api-access-jq4dl" (OuterVolumeSpecName: "kube-api-access-jq4dl") pod "b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" (UID: "b47a0f60-e8c4-44d3-b13b-dd6554aa06b7"). InnerVolumeSpecName "kube-api-access-jq4dl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.238090 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" (UID: "b47a0f60-e8c4-44d3-b13b-dd6554aa06b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.300372 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.300627 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq4dl\" (UniqueName: \"kubernetes.io/projected/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-kube-api-access-jq4dl\") on node \"crc\" DevicePath \"\""
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.300640 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.565876 4996 generic.go:334] "Generic (PLEG): container finished" podID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerID="c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe" exitCode=0
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.565924 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nd95" event={"ID":"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7","Type":"ContainerDied","Data":"c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe"}
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.565988 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nd95" event={"ID":"b47a0f60-e8c4-44d3-b13b-dd6554aa06b7","Type":"ContainerDied","Data":"42481a4cd9bd978718fdfd0eacad16acd5631b52c9856177507c6ed3d5d270af"}
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.566023 4996 scope.go:117] "RemoveContainer" containerID="c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.565999 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nd95"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.588875 4996 scope.go:117] "RemoveContainer" containerID="4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.603190 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nd95"]
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.613106 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nd95"]
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.620719 4996 scope.go:117] "RemoveContainer" containerID="fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.655316 4996 scope.go:117] "RemoveContainer" containerID="c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe"
Feb 28 11:15:37 crc kubenswrapper[4996]: E0228 11:15:37.655779 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe\": container with ID starting with c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe not found: ID does not exist" containerID="c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.655816 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe"} err="failed to get container status \"c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe\": rpc error: code = NotFound desc = could not find container \"c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe\": container with ID starting with c7eea3d1665e1e5d6cdda41b3af524840ef0ae0967bad035a8c95ba7aeb45fbe not found: ID does not exist"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.655837 4996 scope.go:117] "RemoveContainer" containerID="4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e"
Feb 28 11:15:37 crc kubenswrapper[4996]: E0228 11:15:37.656062 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e\": container with ID starting with 4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e not found: ID does not exist" containerID="4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.656085 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e"} err="failed to get container status \"4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e\": rpc error: code = NotFound desc = could not find container \"4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e\": container with ID starting with 4747153a35ee012010115c85f915c70409b250f3ae540687e61bb157ffe4364e not found: ID does not exist"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.656100 4996 scope.go:117] "RemoveContainer" containerID="fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598"
Feb 28 11:15:37 crc kubenswrapper[4996]: E0228 11:15:37.656277 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598\": container with ID starting with fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598 not found: ID does not exist" containerID="fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598"
Feb 28 11:15:37 crc kubenswrapper[4996]: I0228 11:15:37.656296 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598"} err="failed to get container status \"fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598\": rpc error: code = NotFound desc = could not find container \"fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598\": container with ID starting with fae05ef21e184183f13704352b76808a2036f32f679c1e3ebd77d1c71797c598 not found: ID does not exist"
Feb 28 11:15:39 crc kubenswrapper[4996]: I0228 11:15:39.043595 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" path="/var/lib/kubelet/pods/b47a0f60-e8c4-44d3-b13b-dd6554aa06b7/volumes"
Feb 28 11:15:44 crc kubenswrapper[4996]: I0228 11:15:44.117632 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k58fm"
Feb 28 11:15:44 crc kubenswrapper[4996]: I0228 11:15:44.118158 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k58fm"
Feb 28 11:15:44 crc kubenswrapper[4996]: I0228 11:15:44.166735 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k58fm"
Feb 28 11:15:44 crc kubenswrapper[4996]: I0228 11:15:44.378179 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ansibletest-ansibletest_f6886487-0ab2-404d-aa70-4be59320885a/ansibletest-ansibletest/0.log"
Feb 28 11:15:44 crc kubenswrapper[4996]: I0228 11:15:44.569953 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58dc667d-krgck_ae98f057-6852-4905-a4d6-5b6d121cb4a1/barbican-api/0.log"
Feb 28 11:15:44 crc kubenswrapper[4996]: I0228 11:15:44.671469 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-58dc667d-krgck_ae98f057-6852-4905-a4d6-5b6d121cb4a1/barbican-api-log/0.log"
Feb 28 11:15:44 crc kubenswrapper[4996]: I0228 11:15:44.692858 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k58fm"
Feb 28 11:15:44 crc kubenswrapper[4996]: I0228 11:15:44.748637 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k58fm"]
Feb 28 11:15:44 crc kubenswrapper[4996]: I0228 11:15:44.815309 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7457c94496-jvp8b_d4d19959-1945-43a8-b005-f4f136fcdf10/barbican-keystone-listener/0.log"
Feb 28 11:15:45 crc kubenswrapper[4996]: I0228 11:15:45.093248 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-665ffb6dc-7h4gw_85fcdb36-feb8-4f2f-a91e-ffbce6e91d04/barbican-worker-log/0.log"
Feb 28 11:15:45 crc kubenswrapper[4996]: I0228 11:15:45.101695 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-665ffb6dc-7h4gw_85fcdb36-feb8-4f2f-a91e-ffbce6e91d04/barbican-worker/0.log"
Feb 28 11:15:45 crc kubenswrapper[4996]: I0228 11:15:45.360841 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7457c94496-jvp8b_d4d19959-1945-43a8-b005-f4f136fcdf10/barbican-keystone-listener-log/0.log"
Feb 28 11:15:45 crc kubenswrapper[4996]: I0228 11:15:45.405733 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9rs75_d0a3e2dd-04e0-4625-b69c-6fddf875deeb/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:45 crc kubenswrapper[4996]: I0228 11:15:45.515345 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_84f19b5a-912c-4a5d-a7f7-05d8a637bc1c/ceilometer-central-agent/0.log"
Feb 28 11:15:45 crc kubenswrapper[4996]: I0228 11:15:45.570976 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_84f19b5a-912c-4a5d-a7f7-05d8a637bc1c/ceilometer-notification-agent/0.log"
Feb 28 11:15:45 crc kubenswrapper[4996]: I0228 11:15:45.623458 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_84f19b5a-912c-4a5d-a7f7-05d8a637bc1c/proxy-httpd/0.log"
Feb 28 11:15:45 crc kubenswrapper[4996]: I0228 11:15:45.673965 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_84f19b5a-912c-4a5d-a7f7-05d8a637bc1c/sg-core/0.log"
Feb 28 11:15:46 crc kubenswrapper[4996]: I0228 11:15:46.035385 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-d29hh_f0ea0b93-3364-4191-b14e-6ad457132874/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:46 crc kubenswrapper[4996]: I0228 11:15:46.151444 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4h2v9_4e3c3631-dbf1-4c5a-8fe4-0b0a94de9efd/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:46 crc kubenswrapper[4996]: I0228 11:15:46.378664 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f13ff650-58de-4d3f-a56b-f77ef33ddf89/cinder-api/0.log"
Feb 28 11:15:46 crc kubenswrapper[4996]: I0228 11:15:46.379970 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_f13ff650-58de-4d3f-a56b-f77ef33ddf89/cinder-api-log/0.log"
Feb 28 11:15:46 crc kubenswrapper[4996]: I0228 11:15:46.668946 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k58fm" podUID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerName="registry-server" containerID="cri-o://e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28" gracePeriod=2
Feb 28 11:15:46 crc kubenswrapper[4996]: I0228 11:15:46.727343 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_16eb7691-5159-4f12-88d5-79d8e9b902b2/probe/0.log"
Feb 28 11:15:46 crc kubenswrapper[4996]: I0228 11:15:46.802325 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a57ea6ee-2619-4875-96e6-60622a9754d3/cinder-scheduler/0.log"
Feb 28 11:15:46 crc kubenswrapper[4996]: I0228 11:15:46.808157 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_16eb7691-5159-4f12-88d5-79d8e9b902b2/cinder-backup/0.log"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.203638 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k58fm"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.215046 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a57ea6ee-2619-4875-96e6-60622a9754d3/probe/0.log"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.292735 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_d40d2784-2f7e-4cde-bb71-ff077d54ea57/probe/0.log"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.306190 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-utilities\") pod \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") "
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.306361 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-catalog-content\") pod \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") "
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.306435 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k28tb\" (UniqueName: \"kubernetes.io/projected/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-kube-api-access-k28tb\") pod \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\" (UID: \"db675c69-d3ce-4b4c-a370-1fa32ec14ad0\") "
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.307644 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-utilities" (OuterVolumeSpecName: "utilities") pod "db675c69-d3ce-4b4c-a370-1fa32ec14ad0" (UID: "db675c69-d3ce-4b4c-a370-1fa32ec14ad0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.308315 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_d40d2784-2f7e-4cde-bb71-ff077d54ea57/cinder-volume/0.log"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.315307 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-kube-api-access-k28tb" (OuterVolumeSpecName: "kube-api-access-k28tb") pod "db675c69-d3ce-4b4c-a370-1fa32ec14ad0" (UID: "db675c69-d3ce-4b4c-a370-1fa32ec14ad0"). InnerVolumeSpecName "kube-api-access-k28tb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.408426 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k28tb\" (UniqueName: \"kubernetes.io/projected/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-kube-api-access-k28tb\") on node \"crc\" DevicePath \"\""
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.408465 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.444864 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db675c69-d3ce-4b4c-a370-1fa32ec14ad0" (UID: "db675c69-d3ce-4b4c-a370-1fa32ec14ad0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.467807 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-7qknl_a8f28eab-0652-46d4-817f-9b48a6f71e4a/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.510310 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db675c69-d3ce-4b4c-a370-1fa32ec14ad0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.553879 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-64g89_1fbdc43d-b502-4eca-9040-604271ec1f6e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.679765 4996 generic.go:334] "Generic (PLEG): container finished" podID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerID="e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28" exitCode=0
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.680089 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k58fm"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.679972 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k58fm" event={"ID":"db675c69-d3ce-4b4c-a370-1fa32ec14ad0","Type":"ContainerDied","Data":"e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28"}
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.681122 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k58fm" event={"ID":"db675c69-d3ce-4b4c-a370-1fa32ec14ad0","Type":"ContainerDied","Data":"94a9bba05ac978a1b6ed66d884117cb98e8871a8ad0332905324a8f6e363f707"}
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.681154 4996 scope.go:117] "RemoveContainer" containerID="e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.705136 4996 scope.go:117] "RemoveContainer" containerID="0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.725780 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k58fm"]
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.734460 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-t5mnf_3bea2fd5-b365-4936-a700-6810be669d7b/init/0.log"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.737296 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k58fm"]
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.740151 4996 scope.go:117] "RemoveContainer" containerID="4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.778182 4996 scope.go:117] "RemoveContainer" containerID="e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28"
Feb 28 11:15:47 crc kubenswrapper[4996]: E0228 11:15:47.778466 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28\": container with ID starting with e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28 not found: ID does not exist" containerID="e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.778503 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28"} err="failed to get container status \"e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28\": rpc error: code = NotFound desc = could not find container \"e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28\": container with ID starting with e56c8f59c64990ecf9f1d955b06c715ad74113e47a814bc98de038fdd0815a28 not found: ID does not exist"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.778529 4996 scope.go:117] "RemoveContainer" containerID="0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b"
Feb 28 11:15:47 crc kubenswrapper[4996]: E0228 11:15:47.778808 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b\": container with ID starting with 0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b not found: ID does not exist" containerID="0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.778846 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b"} err="failed to get container status \"0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b\": rpc error: code = NotFound desc = could not find container \"0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b\": container with ID starting with 0a0f4ce2317ea780a0ae5d3d2b29f41470d846bf8ea53b596831f6b83f496c8b not found: ID does not exist"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.778864 4996 scope.go:117] "RemoveContainer" containerID="4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d"
Feb 28 11:15:47 crc kubenswrapper[4996]: E0228 11:15:47.779206 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d\": container with ID starting with 4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d not found: ID does not exist" containerID="4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.779244 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d"} err="failed to get container status \"4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d\": rpc error: code = NotFound desc = could not find container \"4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d\": container with ID starting with 4ed73725ab932744e0f10341dd7d6c1ccd9162479588e5a360980178b396c83d not found: ID does not exist"
Feb 28 11:15:47 crc kubenswrapper[4996]: I0228 11:15:47.918503 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-t5mnf_3bea2fd5-b365-4936-a700-6810be669d7b/init/0.log"
Feb 28 11:15:48 crc kubenswrapper[4996]: I0228 11:15:48.037558 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d7bb241a-bbf4-499a-b203-d51d32c8964d/glance-httpd/0.log"
Feb 28 11:15:48 crc kubenswrapper[4996]: I0228 11:15:48.160233 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d7bb241a-bbf4-499a-b203-d51d32c8964d/glance-log/0.log"
Feb 28 11:15:48 crc kubenswrapper[4996]: I0228 11:15:48.203771 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-t5mnf_3bea2fd5-b365-4936-a700-6810be669d7b/dnsmasq-dns/0.log"
Feb 28 11:15:48 crc kubenswrapper[4996]: I0228 11:15:48.310455 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6608d2cf-7157-45c2-9a82-99354bf88cee/glance-httpd/0.log"
Feb 28 11:15:48 crc kubenswrapper[4996]: I0228 11:15:48.357145 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_6608d2cf-7157-45c2-9a82-99354bf88cee/glance-log/0.log"
Feb 28 11:15:48 crc kubenswrapper[4996]: I0228 11:15:48.537592 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6ccc6bcbc4-2fmz9_b605afa6-a344-45f0-b62a-56f46b346c52/horizon/0.log"
Feb 28 11:15:48 crc kubenswrapper[4996]: I0228 11:15:48.715253 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizontest-tests-horizontest_915895e5-31ba-450f-b3e8-a385e5937353/horizontest-tests-horizontest/0.log"
Feb 28 11:15:48 crc kubenswrapper[4996]: I0228 11:15:48.872837 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-9jz8v_645c2ca2-c74e-44d7-a0e7-6f161b14aa55/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:48 crc kubenswrapper[4996]: I0228 11:15:48.998926 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ghlpr_500ae8f8-17b1-45fb-9569-d49fd19cdea6/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:49 crc kubenswrapper[4996]: I0228 11:15:49.034061 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa"
Feb 28 11:15:49 crc kubenswrapper[4996]: E0228 11:15:49.034320 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63"
Feb 28 11:15:49 crc kubenswrapper[4996]: I0228 11:15:49.049981 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" path="/var/lib/kubelet/pods/db675c69-d3ce-4b4c-a370-1fa32ec14ad0/volumes"
Feb 28 11:15:49 crc kubenswrapper[4996]: I0228 11:15:49.256984 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29537881-447jn_d637ef52-36d0-4c60-8bef-201d71cac614/keystone-cron/0.log"
Feb 28 11:15:49 crc kubenswrapper[4996]: I0228 11:15:49.482955 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29537941-pj4nm_9acff40e-9809-41c5-b307-388aa1a815d2/keystone-cron/0.log"
Feb 28 11:15:49 crc kubenswrapper[4996]: I0228 11:15:49.530776 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_93e23f7f-31d2-496c-898d-4f46db4da6cc/kube-state-metrics/0.log"
Feb 28 11:15:49 crc kubenswrapper[4996]: I0228 11:15:49.767639 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-jmx2l_f0393bfd-0a6b-48e8-8ccb-45ec21b73b58/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:50 crc kubenswrapper[4996]: I0228 11:15:50.011800 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7f342b89-e95f-4811-a844-690bb97b8b32/manila-api-log/0.log"
Feb 28 11:15:50 crc kubenswrapper[4996]: I0228 11:15:50.142511 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6ccc6bcbc4-2fmz9_b605afa6-a344-45f0-b62a-56f46b346c52/horizon-log/0.log"
Feb 28 11:15:50 crc kubenswrapper[4996]: I0228 11:15:50.244766 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_7f342b89-e95f-4811-a844-690bb97b8b32/manila-api/0.log"
Feb 28 11:15:50 crc kubenswrapper[4996]: I0228 11:15:50.245836 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_bb3beaae-37f6-4cbd-af32-919a3b9df37e/probe/0.log"
Feb 28 11:15:50 crc kubenswrapper[4996]: I0228 11:15:50.401530 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_bb3beaae-37f6-4cbd-af32-919a3b9df37e/manila-scheduler/0.log"
Feb 28 11:15:50 crc kubenswrapper[4996]: I0228 11:15:50.508294 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_bd1af971-3595-4d44-98b7-8878b4d13222/probe/0.log"
Feb 28 11:15:50 crc kubenswrapper[4996]: I0228 11:15:50.528555 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_bd1af971-3595-4d44-98b7-8878b4d13222/manila-share/0.log"
Feb 28 11:15:51 crc kubenswrapper[4996]: I0228 11:15:51.244190 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bjwzt_b17e3d39-7e71-472f-9011-d825c77b005a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:51 crc kubenswrapper[4996]: I0228 11:15:51.881973 4996 scope.go:117] "RemoveContainer" containerID="83250618dd248b38578b7336e79fdb5d88803f61bd010c82426fdb6611d4c928"
Feb 28 11:15:51 crc kubenswrapper[4996]: I0228 11:15:51.926498 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-679bcc7697-9hs5j_50e72561-9c77-43f9-8f8d-0c9be05be3f6/neutron-httpd/0.log"
Feb 28 11:15:52 crc kubenswrapper[4996]: I0228 11:15:52.656422 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-679bcc7697-9hs5j_50e72561-9c77-43f9-8f8d-0c9be05be3f6/neutron-api/0.log"
Feb 28 11:15:52 crc kubenswrapper[4996]: I0228 11:15:52.693776 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-644f7b559b-gngw5_736c34b0-e2b3-4d08-be5b-53491a475d18/keystone-api/0.log"
Feb 28 11:15:53 crc kubenswrapper[4996]: I0228 11:15:53.535431 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_09bc4f70-3953-4e3d-a6b0-60905a719e37/nova-cell1-conductor-conductor/0.log"
Feb 28 11:15:53 crc kubenswrapper[4996]: I0228 11:15:53.612541 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d65ff4ec-036e-4680-8a41-9941e185fc14/nova-cell0-conductor-conductor/0.log"
Feb 28 11:15:54 crc kubenswrapper[4996]: I0228 11:15:54.113041 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_28a8ab76-f177-47a0-8b6c-9f8c75739b30/nova-cell1-novncproxy-novncproxy/0.log"
Feb 28 11:15:54 crc kubenswrapper[4996]: I0228 11:15:54.222421 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-7b8xm_1d56c0f7-03f9-4035-b2d2-ef6d77821940/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:54 crc kubenswrapper[4996]: I0228 11:15:54.561424 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3eb16fcc-5ac7-437e-bca5-e82873599fac/nova-metadata-log/0.log"
Feb 28 11:15:55 crc kubenswrapper[4996]: I0228 11:15:55.590935 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_07191c7b-ef05-4fca-ab52-6df77fc1b92a/nova-api-log/0.log"
Feb 28 11:15:55 crc kubenswrapper[4996]: I0228 11:15:55.621792 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_594a5261-8810-4189-9140-39d0fc645c6e/nova-scheduler-scheduler/0.log"
Feb 28 11:15:55 crc kubenswrapper[4996]: I0228 11:15:55.876219 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25c48fac-9425-4af6-aa7d-6b2c2428ef2d/mysql-bootstrap/0.log"
Feb 28 11:15:56 crc kubenswrapper[4996]: I0228 11:15:56.022982 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25c48fac-9425-4af6-aa7d-6b2c2428ef2d/mysql-bootstrap/0.log"
Feb 28 11:15:56 crc kubenswrapper[4996]: I0228 11:15:56.128544 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_25c48fac-9425-4af6-aa7d-6b2c2428ef2d/galera/0.log"
Feb 28 11:15:56 crc kubenswrapper[4996]: I0228 11:15:56.351466 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cce18b01-6974-43c9-86e2-564a4024564b/mysql-bootstrap/0.log"
Feb 28 11:15:56 crc kubenswrapper[4996]: I0228 11:15:56.592327 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cce18b01-6974-43c9-86e2-564a4024564b/mysql-bootstrap/0.log"
Feb 28 11:15:56 crc kubenswrapper[4996]: I0228 11:15:56.633803 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_cce18b01-6974-43c9-86e2-564a4024564b/galera/0.log"
Feb 28 11:15:56 crc kubenswrapper[4996]: I0228 11:15:56.849930 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ea848d22-46ca-46ec-a5e7-5b26014b569b/openstackclient/0.log"
Feb 28 11:15:56 crc kubenswrapper[4996]: I0228 11:15:56.976494 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_07191c7b-ef05-4fca-ab52-6df77fc1b92a/nova-api-api/0.log"
Feb 28 11:15:57 crc kubenswrapper[4996]: I0228 11:15:57.094313 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6rm4w_ab34e1ca-2f20-4604-85fa-ca92e0a1ce68/ovn-controller/0.log"
Feb 28 11:15:57 crc kubenswrapper[4996]: I0228 11:15:57.263271 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5hxhx_1edf409f-42c6-4e00-bf2e-6cd81644033a/openstack-network-exporter/0.log"
Feb 28 11:15:57 crc kubenswrapper[4996]: I0228 11:15:57.431925 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lm47_664813b7-20c4-40e4-b4a8-9beacfb177fa/ovsdb-server-init/0.log"
Feb 28 11:15:57 crc kubenswrapper[4996]: I0228 11:15:57.633321 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lm47_664813b7-20c4-40e4-b4a8-9beacfb177fa/ovsdb-server/0.log"
Feb 28 11:15:57 crc kubenswrapper[4996]: I0228 11:15:57.641353 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lm47_664813b7-20c4-40e4-b4a8-9beacfb177fa/ovsdb-server-init/0.log"
Feb 28 11:15:57 crc kubenswrapper[4996]: I0228 11:15:57.669170 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-7lm47_664813b7-20c4-40e4-b4a8-9beacfb177fa/ovs-vswitchd/0.log"
Feb 28 11:15:57 crc kubenswrapper[4996]: I0228 11:15:57.868503 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-f8v97_688e7207-5681-405d-9548-9c8d753b28e1/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 28 11:15:58 crc kubenswrapper[4996]: I0228 11:15:58.055134 4996 log.go:25]
"Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c38b2e2f-cb15-44c6-b4d9-1b9d80c57045/openstack-network-exporter/0.log" Feb 28 11:15:58 crc kubenswrapper[4996]: I0228 11:15:58.080161 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_c38b2e2f-cb15-44c6-b4d9-1b9d80c57045/ovn-northd/0.log" Feb 28 11:15:58 crc kubenswrapper[4996]: I0228 11:15:58.232283 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0b250a70-da80-4cf5-842b-3a4897a4cbc8/openstack-network-exporter/0.log" Feb 28 11:15:58 crc kubenswrapper[4996]: I0228 11:15:58.267898 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0b250a70-da80-4cf5-842b-3a4897a4cbc8/ovsdbserver-nb/0.log" Feb 28 11:15:58 crc kubenswrapper[4996]: I0228 11:15:58.425177 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e/openstack-network-exporter/0.log" Feb 28 11:15:58 crc kubenswrapper[4996]: I0228 11:15:58.473428 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_46fbd9a6-fd40-4ab8-bad5-f4c0397fdb3e/ovsdbserver-sb/0.log" Feb 28 11:15:58 crc kubenswrapper[4996]: I0228 11:15:58.596773 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3eb16fcc-5ac7-437e-bca5-e82873599fac/nova-metadata-metadata/0.log" Feb 28 11:15:58 crc kubenswrapper[4996]: I0228 11:15:58.846921 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_77efb507-fab5-4164-8cd8-576b15f4d6f8/setup-container/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.116410 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_77efb507-fab5-4164-8cd8-576b15f4d6f8/setup-container/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.117836 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_77efb507-fab5-4164-8cd8-576b15f4d6f8/rabbitmq/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.214843 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-594c4f7c44-lnbrv_d17faf34-1a55-4544-8da0-2b15159ff1d6/placement-api/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.348398 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f61ee28f-ef2a-45ee-9832-57559af20a84/setup-container/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.391384 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-594c4f7c44-lnbrv_d17faf34-1a55-4544-8da0-2b15159ff1d6/placement-log/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.490491 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f61ee28f-ef2a-45ee-9832-57559af20a84/setup-container/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.566992 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f61ee28f-ef2a-45ee-9832-57559af20a84/rabbitmq/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.574583 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-sqh9f_b2cd442e-b51b-41cc-a664-fead95314ada/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.810258 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-mkxvf_6dbca8bf-95da-4cd5-b57e-d810e5f39ae6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:15:59 crc kubenswrapper[4996]: I0228 11:15:59.845905 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-k5hwc_edad127b-e6c2-4b27-add0-60234ee9f1cb/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.034274 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:16:00 crc kubenswrapper[4996]: E0228 11:16:00.034496 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.144390 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-npr5w_eacfea11-3471-48df-a164-22a498aa7574/ssh-known-hosts-edpm-deployment/0.log" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.159110 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537956-j4bpx"] Feb 28 11:16:00 crc kubenswrapper[4996]: E0228 11:16:00.159630 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerName="extract-utilities" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.159646 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerName="extract-utilities" Feb 28 11:16:00 crc kubenswrapper[4996]: E0228 11:16:00.159658 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerName="extract-content" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.159665 4996 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerName="extract-content" Feb 28 11:16:00 crc kubenswrapper[4996]: E0228 11:16:00.159681 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerName="registry-server" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.159688 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerName="registry-server" Feb 28 11:16:00 crc kubenswrapper[4996]: E0228 11:16:00.159711 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerName="registry-server" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.159717 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerName="registry-server" Feb 28 11:16:00 crc kubenswrapper[4996]: E0228 11:16:00.159729 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerName="extract-utilities" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.159735 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerName="extract-utilities" Feb 28 11:16:00 crc kubenswrapper[4996]: E0228 11:16:00.159749 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerName="extract-content" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.159755 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerName="extract-content" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.159939 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="db675c69-d3ce-4b4c-a370-1fa32ec14ad0" containerName="registry-server" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.159958 4996 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b47a0f60-e8c4-44d3-b13b-dd6554aa06b7" containerName="registry-server" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.160644 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537956-j4bpx" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.164618 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.164798 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.164838 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.180059 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537956-j4bpx"] Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.216441 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_5f62e7a0-18c6-441e-8804-4760a6dd1efc/tempest-tests-tempest-tests-runner/0.log" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.226737 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkxg2\" (UniqueName: \"kubernetes.io/projected/5af459d2-cf40-434a-8052-9e5a0392c0eb-kube-api-access-pkxg2\") pod \"auto-csr-approver-29537956-j4bpx\" (UID: \"5af459d2-cf40-434a-8052-9e5a0392c0eb\") " pod="openshift-infra/auto-csr-approver-29537956-j4bpx" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.329390 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkxg2\" (UniqueName: \"kubernetes.io/projected/5af459d2-cf40-434a-8052-9e5a0392c0eb-kube-api-access-pkxg2\") pod \"auto-csr-approver-29537956-j4bpx\" (UID: 
\"5af459d2-cf40-434a-8052-9e5a0392c0eb\") " pod="openshift-infra/auto-csr-approver-29537956-j4bpx" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.360693 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkxg2\" (UniqueName: \"kubernetes.io/projected/5af459d2-cf40-434a-8052-9e5a0392c0eb-kube-api-access-pkxg2\") pod \"auto-csr-approver-29537956-j4bpx\" (UID: \"5af459d2-cf40-434a-8052-9e5a0392c0eb\") " pod="openshift-infra/auto-csr-approver-29537956-j4bpx" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.396958 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_af137d17-a90e-42ea-8e73-3dba0196c670/tempest-tests-tempest-tests-runner/0.log" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.408903 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-ansibletest-ansibletest-ansibletest_502af5eb-df11-47d8-b386-7c8dc19e280c/test-operator-logs-container/0.log" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.483603 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537956-j4bpx" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.605026 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-horizontest-horizontest-tests-horizontest_c025b832-dde4-4bf3-ada7-0a882c92dd0b/test-operator-logs-container/0.log" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.658315 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_f52349d5-5dab-4972-bdb2-835cb675071f/test-operator-logs-container/0.log" Feb 28 11:16:00 crc kubenswrapper[4996]: I0228 11:16:00.911186 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tobiko-tobiko-tests-tobiko_f624dd26-b398-4f25-b94e-74a5560432a8/test-operator-logs-container/0.log" Feb 28 11:16:01 crc kubenswrapper[4996]: I0228 11:16:01.004847 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537956-j4bpx"] Feb 28 11:16:01 crc kubenswrapper[4996]: I0228 11:16:01.019567 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s00-podified-functional_8ab8d5e2-9d4f-4ba7-a2e7-4c01cf277979/tobiko-tests-tobiko/0.log" Feb 28 11:16:01 crc kubenswrapper[4996]: I0228 11:16:01.147863 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tobiko-tests-tobiko-s01-sanity_e548eb85-b67c-4520-80ff-88f65e118673/tobiko-tests-tobiko/0.log" Feb 28 11:16:01 crc kubenswrapper[4996]: I0228 11:16:01.285922 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-6sltr_4b3851de-b4b3-497e-9b3d-d56d55e05792/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 11:16:01 crc kubenswrapper[4996]: I0228 11:16:01.811603 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537956-j4bpx" 
event={"ID":"5af459d2-cf40-434a-8052-9e5a0392c0eb","Type":"ContainerStarted","Data":"a6a8ac36cbc76932f33314b8d0d9d69704660a82d79209167527f8f663e1ac82"} Feb 28 11:16:02 crc kubenswrapper[4996]: I0228 11:16:02.820779 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537956-j4bpx" event={"ID":"5af459d2-cf40-434a-8052-9e5a0392c0eb","Type":"ContainerStarted","Data":"eec9fab826445a9b1b386bc9610be80cd00ed19eb518054b96818bd52c1f0cbd"} Feb 28 11:16:03 crc kubenswrapper[4996]: I0228 11:16:03.829729 4996 generic.go:334] "Generic (PLEG): container finished" podID="5af459d2-cf40-434a-8052-9e5a0392c0eb" containerID="eec9fab826445a9b1b386bc9610be80cd00ed19eb518054b96818bd52c1f0cbd" exitCode=0 Feb 28 11:16:03 crc kubenswrapper[4996]: I0228 11:16:03.830008 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537956-j4bpx" event={"ID":"5af459d2-cf40-434a-8052-9e5a0392c0eb","Type":"ContainerDied","Data":"eec9fab826445a9b1b386bc9610be80cd00ed19eb518054b96818bd52c1f0cbd"} Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.251856 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537956-j4bpx" Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.347610 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkxg2\" (UniqueName: \"kubernetes.io/projected/5af459d2-cf40-434a-8052-9e5a0392c0eb-kube-api-access-pkxg2\") pod \"5af459d2-cf40-434a-8052-9e5a0392c0eb\" (UID: \"5af459d2-cf40-434a-8052-9e5a0392c0eb\") " Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.372236 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af459d2-cf40-434a-8052-9e5a0392c0eb-kube-api-access-pkxg2" (OuterVolumeSpecName: "kube-api-access-pkxg2") pod "5af459d2-cf40-434a-8052-9e5a0392c0eb" (UID: "5af459d2-cf40-434a-8052-9e5a0392c0eb"). 
InnerVolumeSpecName "kube-api-access-pkxg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.450013 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkxg2\" (UniqueName: \"kubernetes.io/projected/5af459d2-cf40-434a-8052-9e5a0392c0eb-kube-api-access-pkxg2\") on node \"crc\" DevicePath \"\"" Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.844595 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3f427004-3205-42d2-86db-84131a0d2ab7/memcached/0.log" Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.851685 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537956-j4bpx" event={"ID":"5af459d2-cf40-434a-8052-9e5a0392c0eb","Type":"ContainerDied","Data":"a6a8ac36cbc76932f33314b8d0d9d69704660a82d79209167527f8f663e1ac82"} Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.851723 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6a8ac36cbc76932f33314b8d0d9d69704660a82d79209167527f8f663e1ac82" Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.851775 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537956-j4bpx" Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.895927 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537950-wspwd"] Feb 28 11:16:05 crc kubenswrapper[4996]: I0228 11:16:05.909806 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537950-wspwd"] Feb 28 11:16:07 crc kubenswrapper[4996]: I0228 11:16:07.044044 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f011ae66-b283-4176-a114-20d35ff4065e" path="/var/lib/kubelet/pods/f011ae66-b283-4176-a114-20d35ff4065e/volumes" Feb 28 11:16:13 crc kubenswrapper[4996]: I0228 11:16:13.033647 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:16:13 crc kubenswrapper[4996]: E0228 11:16:13.034634 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:16:25 crc kubenswrapper[4996]: E0228 11:16:25.033778 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:16:25 crc kubenswrapper[4996]: I0228 11:16:25.420636 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/util/0.log" Feb 28 11:16:25 crc kubenswrapper[4996]: I0228 11:16:25.647575 4996 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/pull/0.log" Feb 28 11:16:25 crc kubenswrapper[4996]: I0228 11:16:25.668164 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/util/0.log" Feb 28 11:16:25 crc kubenswrapper[4996]: I0228 11:16:25.700197 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/pull/0.log" Feb 28 11:16:25 crc kubenswrapper[4996]: I0228 11:16:25.839628 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/pull/0.log" Feb 28 11:16:25 crc kubenswrapper[4996]: I0228 11:16:25.902547 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/extract/0.log" Feb 28 11:16:25 crc kubenswrapper[4996]: I0228 11:16:25.908776 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3d79630fef229c9854354c3516822e9d62481e92d2e19b6c3a1d2ddab7vn6nd_45ec8d80-174c-4265-8b8e-dfdda274e589/util/0.log" Feb 28 11:16:26 crc kubenswrapper[4996]: I0228 11:16:26.460493 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-bk2ml_996ef81c-b994-461d-a9e0-ec61f8fe65f3/manager/0.log" Feb 28 11:16:26 crc kubenswrapper[4996]: I0228 11:16:26.839851 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-2bv7j_325efd0b-ff17-4ea1-a1d5-c12576259ce5/manager/0.log" 
Feb 28 11:16:26 crc kubenswrapper[4996]: I0228 11:16:26.941139 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-4zmmg_9b610086-19c9-4e01-8c4e-dcf6660d749e/manager/0.log" Feb 28 11:16:27 crc kubenswrapper[4996]: I0228 11:16:27.039476 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:16:27 crc kubenswrapper[4996]: E0228 11:16:27.039966 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:16:27 crc kubenswrapper[4996]: I0228 11:16:27.207296 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-7cdhw_cb2d53e3-ca80-4c1c-8d0b-02caeb753792/manager/0.log" Feb 28 11:16:27 crc kubenswrapper[4996]: I0228 11:16:27.716870 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-6kcm4_07d97cb5-6c6a-4d30-9454-8c13b5fc9adc/manager/0.log" Feb 28 11:16:27 crc kubenswrapper[4996]: I0228 11:16:27.971387 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-x6bwt_c8adb771-c22d-4f69-90a5-61cd4a36b618/manager/0.log" Feb 28 11:16:28 crc kubenswrapper[4996]: I0228 11:16:28.285679 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-clblk_5ac2fbaf-55f0-4ac4-999c-5e07a4b141f3/manager/0.log" Feb 28 11:16:28 crc kubenswrapper[4996]: I0228 11:16:28.648300 4996 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-pp79l_9d5331d1-e5df-4b2c-8663-5fe6afc00995/manager/0.log" Feb 28 11:16:28 crc kubenswrapper[4996]: I0228 11:16:28.823718 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-zmcjg_de3f6975-8417-4db2-9d04-5364f4127334/manager/0.log" Feb 28 11:16:29 crc kubenswrapper[4996]: I0228 11:16:29.150411 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-kx6ht_84adbefa-8503-41bf-8b9b-662b08251cff/manager/0.log" Feb 28 11:16:29 crc kubenswrapper[4996]: I0228 11:16:29.443867 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-d5qml_a4e35f97-45f5-457f-bc93-86536fcbee68/manager/0.log" Feb 28 11:16:29 crc kubenswrapper[4996]: I0228 11:16:29.477971 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-qs4sw_93f353d8-bbaa-4ec1-b816-d23d58c05ee1/manager/0.log" Feb 28 11:16:29 crc kubenswrapper[4996]: I0228 11:16:29.816682 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-tdl5w_bd32451f-7a7d-429f-906f-d98e355c1abf/manager/0.log" Feb 28 11:16:29 crc kubenswrapper[4996]: I0228 11:16:29.906387 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cdh4x4_2de35814-cd78-4178-8b32-1fbd89de94b4/manager/0.log" Feb 28 11:16:30 crc kubenswrapper[4996]: I0228 11:16:30.269496 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-646b94fdfc-pc26j_e8534bde-79ad-4654-8a2b-8fa14ee7266b/operator/0.log" Feb 28 11:16:30 crc kubenswrapper[4996]: I0228 
11:16:30.299299 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6cj4k_55ef28a1-cfbb-4a48-8b2c-c3f0784976fd/registry-server/0.log" Feb 28 11:16:30 crc kubenswrapper[4996]: I0228 11:16:30.569049 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-wwzjf_8a886fa9-0abd-4197-9a18-09f20f403ef4/manager/0.log" Feb 28 11:16:30 crc kubenswrapper[4996]: I0228 11:16:30.734833 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-q59cv_3a0ccc77-1ced-4c14-a1ac-18523be0afd4/manager/0.log" Feb 28 11:16:30 crc kubenswrapper[4996]: I0228 11:16:30.839366 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-tn67k_36f7fdcf-d295-4ee0-9155-fbd3dc0d1234/operator/0.log" Feb 28 11:16:31 crc kubenswrapper[4996]: I0228 11:16:31.007206 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-c9v26_b503546b-54b6-4133-8d44-6a162ef54232/manager/0.log" Feb 28 11:16:31 crc kubenswrapper[4996]: I0228 11:16:31.265452 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-f88sn_2d7f8619-4576-4fb4-83e1-73ebe232a06d/manager/0.log" Feb 28 11:16:31 crc kubenswrapper[4996]: I0228 11:16:31.282855 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-655d95ddc7-xxt4d_e4770c19-1759-4f93-88ea-696d28d6b149/manager/0.log" Feb 28 11:16:31 crc kubenswrapper[4996]: I0228 11:16:31.482343 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-qzzwv_b1a21e4c-eb15-4914-9366-45a0bc6f2e3d/manager/0.log" Feb 28 11:16:32 crc kubenswrapper[4996]: I0228 
11:16:32.178247 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65cbf4f977-dh2cm_143a07a9-b2e4-4b4b-9328-a3feee140c26/manager/0.log" Feb 28 11:16:40 crc kubenswrapper[4996]: I0228 11:16:40.536917 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-pppmx_9369ade1-1b2d-45cf-b376-1963d785be5c/manager/0.log" Feb 28 11:16:42 crc kubenswrapper[4996]: I0228 11:16:42.032605 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:16:42 crc kubenswrapper[4996]: E0228 11:16:42.033040 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:16:51 crc kubenswrapper[4996]: I0228 11:16:51.960330 4996 scope.go:117] "RemoveContainer" containerID="8388481515d58f45eacdc7c0db99b537c2705d20a6bf4cfddcdc41531a21afde" Feb 28 11:16:52 crc kubenswrapper[4996]: I0228 11:16:52.809819 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-bhdsm_3f55132b-9e49-49fb-9043-aa56c455ea0f/control-plane-machine-set-operator/0.log" Feb 28 11:16:53 crc kubenswrapper[4996]: I0228 11:16:53.002870 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xd5f6_f51a22df-16fd-4f58-85dd-af4d0fc97752/machine-api-operator/0.log" Feb 28 11:16:53 crc kubenswrapper[4996]: I0228 11:16:53.009397 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-xd5f6_f51a22df-16fd-4f58-85dd-af4d0fc97752/kube-rbac-proxy/0.log" Feb 28 11:16:55 crc kubenswrapper[4996]: I0228 11:16:55.033459 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:16:55 crc kubenswrapper[4996]: I0228 11:16:55.319789 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"806c079a5a1551c11992d62c6379f94a5a2b3cadbf51278f03d70633d462650c"} Feb 28 11:17:05 crc kubenswrapper[4996]: I0228 11:17:05.222236 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-z92ks_e5269cb9-2bff-4476-92a8-fc85304fe923/cert-manager-controller/0.log" Feb 28 11:17:05 crc kubenswrapper[4996]: I0228 11:17:05.391356 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-wl22t_7db94663-acd6-4e4c-a203-2cec2afad8da/cert-manager-webhook/0.log" Feb 28 11:17:05 crc kubenswrapper[4996]: I0228 11:17:05.391570 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-fmb5g_6305a0f6-5022-49e7-b7a3-e41862e0bfbc/cert-manager-cainjector/0.log" Feb 28 11:17:17 crc kubenswrapper[4996]: I0228 11:17:17.275023 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-8nvql_24a4ec72-da59-4afb-93a8-07f88c99753f/nmstate-console-plugin/0.log" Feb 28 11:17:17 crc kubenswrapper[4996]: I0228 11:17:17.399043 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jmt82_ce37c1b2-44ca-4001-b29b-518b02279f50/nmstate-handler/0.log" Feb 28 11:17:17 crc kubenswrapper[4996]: I0228 11:17:17.446289 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-97f4k_6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd/nmstate-metrics/0.log" Feb 28 11:17:17 crc kubenswrapper[4996]: I0228 11:17:17.447930 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-97f4k_6b7eff8e-5a88-4cbc-aec4-1bf997fe31cd/kube-rbac-proxy/0.log" Feb 28 11:17:17 crc kubenswrapper[4996]: I0228 11:17:17.683708 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-s685x_b1a6c76e-564f-4300-ab4b-001eade60a3c/nmstate-operator/0.log" Feb 28 11:17:17 crc kubenswrapper[4996]: I0228 11:17:17.686845 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-tbjkf_a536c0f7-7da6-4af1-91a2-78ef301ca956/nmstate-webhook/0.log" Feb 28 11:17:30 crc kubenswrapper[4996]: E0228 11:17:30.033128 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:17:42 crc kubenswrapper[4996]: I0228 11:17:42.716552 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-ndmxj_2c5f5c9c-0220-40e1-9180-424aa6b0b104/kube-rbac-proxy/0.log" Feb 28 11:17:42 crc kubenswrapper[4996]: I0228 11:17:42.889821 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-ndmxj_2c5f5c9c-0220-40e1-9180-424aa6b0b104/controller/0.log" Feb 28 11:17:42 crc kubenswrapper[4996]: I0228 11:17:42.961383 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-frr-files/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.152106 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-reloader/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.166196 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-frr-files/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.195937 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-reloader/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.214995 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-metrics/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.374515 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-frr-files/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.401737 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-reloader/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.408813 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-metrics/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.454819 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-metrics/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.612964 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-frr-files/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.631613 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-metrics/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.642300 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/controller/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.653854 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/cp-reloader/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.780305 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/frr-metrics/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.805856 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/kube-rbac-proxy-frr/0.log" Feb 28 11:17:43 crc kubenswrapper[4996]: I0228 11:17:43.859620 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/kube-rbac-proxy/0.log" Feb 28 11:17:44 crc kubenswrapper[4996]: I0228 11:17:44.077924 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-9dp7d_1e4f759a-a03c-45a7-b736-776f1556c2f5/frr-k8s-webhook-server/0.log" Feb 28 11:17:44 crc kubenswrapper[4996]: I0228 11:17:44.101101 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/reloader/0.log" Feb 28 11:17:44 crc kubenswrapper[4996]: I0228 11:17:44.399111 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b55d58fc7-vm879_0e8f07d7-a80e-4587-979f-26d28ce2bf2f/manager/0.log" Feb 28 11:17:44 crc kubenswrapper[4996]: I0228 11:17:44.470395 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6f44cf5f86-2slf9_5146aecb-1f48-48a2-ae75-5289e11c2c06/webhook-server/0.log" Feb 28 11:17:44 crc kubenswrapper[4996]: I0228 11:17:44.611169 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w5v6p_3f229baa-c709-4168-b123-25ee77a6f4c0/kube-rbac-proxy/0.log" Feb 28 11:17:45 crc kubenswrapper[4996]: I0228 11:17:45.124328 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-w5v6p_3f229baa-c709-4168-b123-25ee77a6f4c0/speaker/0.log" Feb 28 11:17:46 crc kubenswrapper[4996]: I0228 11:17:46.375650 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7kw64_e645411e-43c5-44dd-b06a-4340e026ef8f/frr/0.log" Feb 28 11:17:58 crc kubenswrapper[4996]: I0228 11:17:58.370854 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/util/0.log" Feb 28 11:17:58 crc kubenswrapper[4996]: I0228 11:17:58.549069 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/util/0.log" Feb 28 11:17:58 crc kubenswrapper[4996]: I0228 11:17:58.575319 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/pull/0.log" Feb 28 11:17:58 crc kubenswrapper[4996]: I0228 11:17:58.667712 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/pull/0.log" Feb 28 11:17:58 crc kubenswrapper[4996]: I0228 11:17:58.868688 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/util/0.log" Feb 28 11:17:58 crc kubenswrapper[4996]: I0228 11:17:58.995299 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/extract/0.log" Feb 28 11:17:59 crc kubenswrapper[4996]: I0228 11:17:59.009716 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82w9zrm_1f3f1b93-b4b8-4171-893a-284b4fc07448/pull/0.log" Feb 28 11:17:59 crc kubenswrapper[4996]: I0228 11:17:59.145821 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-utilities/0.log" Feb 28 11:17:59 crc kubenswrapper[4996]: I0228 11:17:59.335364 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-content/0.log" Feb 28 11:17:59 crc kubenswrapper[4996]: I0228 11:17:59.361133 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-utilities/0.log" Feb 28 11:17:59 crc kubenswrapper[4996]: I0228 11:17:59.361718 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-content/0.log" Feb 28 11:17:59 crc kubenswrapper[4996]: I0228 11:17:59.577134 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-content/0.log" Feb 28 11:17:59 crc kubenswrapper[4996]: I0228 11:17:59.608416 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/extract-utilities/0.log" Feb 28 11:17:59 crc kubenswrapper[4996]: I0228 11:17:59.849878 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-utilities/0.log" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.033254 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-content/0.log" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.078751 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-content/0.log" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.132181 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-utilities/0.log" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.144637 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537958-9n47s"] Feb 28 11:18:00 crc kubenswrapper[4996]: E0228 11:18:00.145111 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af459d2-cf40-434a-8052-9e5a0392c0eb" containerName="oc" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.145129 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af459d2-cf40-434a-8052-9e5a0392c0eb" containerName="oc" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.145346 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af459d2-cf40-434a-8052-9e5a0392c0eb" containerName="oc" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.145911 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537958-9n47s" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.150514 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.150774 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.151046 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.186614 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537958-9n47s"] Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.308404 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k68f2\" (UniqueName: \"kubernetes.io/projected/ffb0cbbd-b638-4d2a-975d-ed3dddb3b031-kube-api-access-k68f2\") pod \"auto-csr-approver-29537958-9n47s\" (UID: \"ffb0cbbd-b638-4d2a-975d-ed3dddb3b031\") " pod="openshift-infra/auto-csr-approver-29537958-9n47s" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.353905 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-utilities/0.log" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.382880 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/extract-content/0.log" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.415442 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k68f2\" (UniqueName: \"kubernetes.io/projected/ffb0cbbd-b638-4d2a-975d-ed3dddb3b031-kube-api-access-k68f2\") pod 
\"auto-csr-approver-29537958-9n47s\" (UID: \"ffb0cbbd-b638-4d2a-975d-ed3dddb3b031\") " pod="openshift-infra/auto-csr-approver-29537958-9n47s" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.436210 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k68f2\" (UniqueName: \"kubernetes.io/projected/ffb0cbbd-b638-4d2a-975d-ed3dddb3b031-kube-api-access-k68f2\") pod \"auto-csr-approver-29537958-9n47s\" (UID: \"ffb0cbbd-b638-4d2a-975d-ed3dddb3b031\") " pod="openshift-infra/auto-csr-approver-29537958-9n47s" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.521527 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537958-9n47s" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.594813 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/util/0.log" Feb 28 11:18:00 crc kubenswrapper[4996]: I0228 11:18:00.823701 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-c7gl7_7bffe615-107e-43bb-a1c6-abcb3684ecc5/registry-server/0.log" Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.136415 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537958-9n47s"] Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.154730 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/pull/0.log" Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.196538 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/pull/0.log" Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 
11:18:01.229335 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/util/0.log" Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.490239 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/util/0.log" Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.519475 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/extract/0.log" Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.554709 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4tvsct_45a76a31-3be4-4492-93bc-3ceb560d1743/pull/0.log" Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.803607 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vtmgq_cee24e37-cdd8-4423-831e-8c13e1f30c37/marketplace-operator/0.log" Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.808064 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lw467_aa038159-c228-4d5d-bf86-18fa4e8c489d/registry-server/0.log" Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.899576 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537958-9n47s" event={"ID":"ffb0cbbd-b638-4d2a-975d-ed3dddb3b031","Type":"ContainerStarted","Data":"74724101bed08ba8ad3553e93b48b7dad19f295f4cc4c8654d45d6ddea79422a"} Feb 28 11:18:01 crc kubenswrapper[4996]: I0228 11:18:01.947454 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-utilities/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.185574 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-utilities/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.263902 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-content/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.272544 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-content/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.456573 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-utilities/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.539117 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/extract-content/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.746450 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-utilities/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.783429 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-b4wzn_6fb55139-32ca-412a-b14e-5a026e75bd03/registry-server/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.894941 4996 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-utilities/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.910674 4996 generic.go:334] "Generic (PLEG): container finished" podID="ffb0cbbd-b638-4d2a-975d-ed3dddb3b031" containerID="8e3c0ee439fde89dac53945d1fd7d9dd66ae7aeafdd9850a0fa94607731b36eb" exitCode=0 Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.910724 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537958-9n47s" event={"ID":"ffb0cbbd-b638-4d2a-975d-ed3dddb3b031","Type":"ContainerDied","Data":"8e3c0ee439fde89dac53945d1fd7d9dd66ae7aeafdd9850a0fa94607731b36eb"} Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.920151 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-content/0.log" Feb 28 11:18:02 crc kubenswrapper[4996]: I0228 11:18:02.970740 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-content/0.log" Feb 28 11:18:03 crc kubenswrapper[4996]: I0228 11:18:03.106047 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-utilities/0.log" Feb 28 11:18:03 crc kubenswrapper[4996]: I0228 11:18:03.130157 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/extract-content/0.log" Feb 28 11:18:03 crc kubenswrapper[4996]: I0228 11:18:03.965808 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hmjwf_75473fc1-d880-4706-b4ff-9c95431be795/registry-server/0.log" Feb 28 11:18:04 crc kubenswrapper[4996]: I0228 11:18:04.278774 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537958-9n47s" Feb 28 11:18:04 crc kubenswrapper[4996]: I0228 11:18:04.404331 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k68f2\" (UniqueName: \"kubernetes.io/projected/ffb0cbbd-b638-4d2a-975d-ed3dddb3b031-kube-api-access-k68f2\") pod \"ffb0cbbd-b638-4d2a-975d-ed3dddb3b031\" (UID: \"ffb0cbbd-b638-4d2a-975d-ed3dddb3b031\") " Feb 28 11:18:04 crc kubenswrapper[4996]: I0228 11:18:04.410987 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb0cbbd-b638-4d2a-975d-ed3dddb3b031-kube-api-access-k68f2" (OuterVolumeSpecName: "kube-api-access-k68f2") pod "ffb0cbbd-b638-4d2a-975d-ed3dddb3b031" (UID: "ffb0cbbd-b638-4d2a-975d-ed3dddb3b031"). InnerVolumeSpecName "kube-api-access-k68f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:18:04 crc kubenswrapper[4996]: I0228 11:18:04.506532 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k68f2\" (UniqueName: \"kubernetes.io/projected/ffb0cbbd-b638-4d2a-975d-ed3dddb3b031-kube-api-access-k68f2\") on node \"crc\" DevicePath \"\"" Feb 28 11:18:04 crc kubenswrapper[4996]: I0228 11:18:04.939072 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537958-9n47s" event={"ID":"ffb0cbbd-b638-4d2a-975d-ed3dddb3b031","Type":"ContainerDied","Data":"74724101bed08ba8ad3553e93b48b7dad19f295f4cc4c8654d45d6ddea79422a"} Feb 28 11:18:04 crc kubenswrapper[4996]: I0228 11:18:04.939111 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74724101bed08ba8ad3553e93b48b7dad19f295f4cc4c8654d45d6ddea79422a" Feb 28 11:18:04 crc kubenswrapper[4996]: I0228 11:18:04.939135 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537958-9n47s" Feb 28 11:18:05 crc kubenswrapper[4996]: I0228 11:18:05.341527 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537952-whtlq"] Feb 28 11:18:05 crc kubenswrapper[4996]: I0228 11:18:05.350086 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537952-whtlq"] Feb 28 11:18:07 crc kubenswrapper[4996]: I0228 11:18:07.044606 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0929b45f-bb4f-4cb4-b3de-a7e3406f7d44" path="/var/lib/kubelet/pods/0929b45f-bb4f-4cb4-b3de-a7e3406f7d44/volumes" Feb 28 11:18:27 crc kubenswrapper[4996]: E0228 11:18:27.231793 4996 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.9:40774->38.102.83.9:39449: write tcp 38.102.83.9:40774->38.102.83.9:39449: write: broken pipe Feb 28 11:18:35 crc kubenswrapper[4996]: E0228 11:18:35.725605 4996 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.9:53568->38.102.83.9:39449: write tcp 38.102.83.9:53568->38.102.83.9:39449: write: broken pipe Feb 28 11:18:49 crc kubenswrapper[4996]: E0228 11:18:49.033118 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:18:52 crc kubenswrapper[4996]: I0228 11:18:52.097594 4996 scope.go:117] "RemoveContainer" containerID="cb2cfe77cb549cd388df41ecc3d4aa41d01fd7f62de02c5f98f91f8009c89d61" Feb 28 11:19:12 crc kubenswrapper[4996]: I0228 11:19:12.248621 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 11:19:12 crc kubenswrapper[4996]: I0228 11:19:12.249170 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:19:42 crc kubenswrapper[4996]: I0228 11:19:42.248812 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:19:42 crc kubenswrapper[4996]: I0228 11:19:42.249388 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:19:52 crc kubenswrapper[4996]: E0228 11:19:52.033298 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.146155 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537960-mq9t9"] Feb 28 11:20:00 crc kubenswrapper[4996]: E0228 11:20:00.147059 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb0cbbd-b638-4d2a-975d-ed3dddb3b031" containerName="oc" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.147074 4996 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ffb0cbbd-b638-4d2a-975d-ed3dddb3b031" containerName="oc" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.147318 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb0cbbd-b638-4d2a-975d-ed3dddb3b031" containerName="oc" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.147911 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537960-mq9t9" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.149623 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.149903 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.150173 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.170149 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537960-mq9t9"] Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.185147 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8b99\" (UniqueName: \"kubernetes.io/projected/c2de6eb6-4e0c-446a-8d38-11b40da6076a-kube-api-access-n8b99\") pod \"auto-csr-approver-29537960-mq9t9\" (UID: \"c2de6eb6-4e0c-446a-8d38-11b40da6076a\") " pod="openshift-infra/auto-csr-approver-29537960-mq9t9" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.286919 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8b99\" (UniqueName: \"kubernetes.io/projected/c2de6eb6-4e0c-446a-8d38-11b40da6076a-kube-api-access-n8b99\") pod \"auto-csr-approver-29537960-mq9t9\" (UID: \"c2de6eb6-4e0c-446a-8d38-11b40da6076a\") " 
pod="openshift-infra/auto-csr-approver-29537960-mq9t9" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.308265 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8b99\" (UniqueName: \"kubernetes.io/projected/c2de6eb6-4e0c-446a-8d38-11b40da6076a-kube-api-access-n8b99\") pod \"auto-csr-approver-29537960-mq9t9\" (UID: \"c2de6eb6-4e0c-446a-8d38-11b40da6076a\") " pod="openshift-infra/auto-csr-approver-29537960-mq9t9" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.465658 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537960-mq9t9" Feb 28 11:20:00 crc kubenswrapper[4996]: I0228 11:20:00.889951 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537960-mq9t9"] Feb 28 11:20:01 crc kubenswrapper[4996]: I0228 11:20:01.055204 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537960-mq9t9" event={"ID":"c2de6eb6-4e0c-446a-8d38-11b40da6076a","Type":"ContainerStarted","Data":"4fe98d35cb73c2441a523b7139078d5f3d041916cd6735f6b999909f57fa8552"} Feb 28 11:20:03 crc kubenswrapper[4996]: I0228 11:20:03.077107 4996 generic.go:334] "Generic (PLEG): container finished" podID="c2de6eb6-4e0c-446a-8d38-11b40da6076a" containerID="3b21babe48c22ab444b34f779241c6cb65f8a297e77c4faa523548b2c39daef7" exitCode=0 Feb 28 11:20:03 crc kubenswrapper[4996]: I0228 11:20:03.077833 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537960-mq9t9" event={"ID":"c2de6eb6-4e0c-446a-8d38-11b40da6076a","Type":"ContainerDied","Data":"3b21babe48c22ab444b34f779241c6cb65f8a297e77c4faa523548b2c39daef7"} Feb 28 11:20:04 crc kubenswrapper[4996]: I0228 11:20:04.492267 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537960-mq9t9" Feb 28 11:20:04 crc kubenswrapper[4996]: I0228 11:20:04.587029 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8b99\" (UniqueName: \"kubernetes.io/projected/c2de6eb6-4e0c-446a-8d38-11b40da6076a-kube-api-access-n8b99\") pod \"c2de6eb6-4e0c-446a-8d38-11b40da6076a\" (UID: \"c2de6eb6-4e0c-446a-8d38-11b40da6076a\") " Feb 28 11:20:04 crc kubenswrapper[4996]: I0228 11:20:04.598424 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2de6eb6-4e0c-446a-8d38-11b40da6076a-kube-api-access-n8b99" (OuterVolumeSpecName: "kube-api-access-n8b99") pod "c2de6eb6-4e0c-446a-8d38-11b40da6076a" (UID: "c2de6eb6-4e0c-446a-8d38-11b40da6076a"). InnerVolumeSpecName "kube-api-access-n8b99". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:20:04 crc kubenswrapper[4996]: I0228 11:20:04.689640 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8b99\" (UniqueName: \"kubernetes.io/projected/c2de6eb6-4e0c-446a-8d38-11b40da6076a-kube-api-access-n8b99\") on node \"crc\" DevicePath \"\"" Feb 28 11:20:05 crc kubenswrapper[4996]: I0228 11:20:05.098248 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537960-mq9t9" event={"ID":"c2de6eb6-4e0c-446a-8d38-11b40da6076a","Type":"ContainerDied","Data":"4fe98d35cb73c2441a523b7139078d5f3d041916cd6735f6b999909f57fa8552"} Feb 28 11:20:05 crc kubenswrapper[4996]: I0228 11:20:05.098315 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fe98d35cb73c2441a523b7139078d5f3d041916cd6735f6b999909f57fa8552" Feb 28 11:20:05 crc kubenswrapper[4996]: I0228 11:20:05.098346 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537960-mq9t9" Feb 28 11:20:05 crc kubenswrapper[4996]: I0228 11:20:05.576972 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537954-2j8mh"] Feb 28 11:20:05 crc kubenswrapper[4996]: I0228 11:20:05.584999 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537954-2j8mh"] Feb 28 11:20:07 crc kubenswrapper[4996]: I0228 11:20:07.048292 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf9c214-b1a6-4fbc-91b8-c77ceb911198" path="/var/lib/kubelet/pods/edf9c214-b1a6-4fbc-91b8-c77ceb911198/volumes" Feb 28 11:20:12 crc kubenswrapper[4996]: I0228 11:20:12.248427 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:20:12 crc kubenswrapper[4996]: I0228 11:20:12.248954 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:20:12 crc kubenswrapper[4996]: I0228 11:20:12.249051 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 11:20:12 crc kubenswrapper[4996]: I0228 11:20:12.250070 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"806c079a5a1551c11992d62c6379f94a5a2b3cadbf51278f03d70633d462650c"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 11:20:12 crc kubenswrapper[4996]: I0228 11:20:12.250136 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://806c079a5a1551c11992d62c6379f94a5a2b3cadbf51278f03d70633d462650c" gracePeriod=600 Feb 28 11:20:13 crc kubenswrapper[4996]: I0228 11:20:13.164438 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="806c079a5a1551c11992d62c6379f94a5a2b3cadbf51278f03d70633d462650c" exitCode=0 Feb 28 11:20:13 crc kubenswrapper[4996]: I0228 11:20:13.164473 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"806c079a5a1551c11992d62c6379f94a5a2b3cadbf51278f03d70633d462650c"} Feb 28 11:20:13 crc kubenswrapper[4996]: I0228 11:20:13.164901 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerStarted","Data":"bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae"} Feb 28 11:20:13 crc kubenswrapper[4996]: I0228 11:20:13.164923 4996 scope.go:117] "RemoveContainer" containerID="3b536960ed76e088df598b334c1d6b994404715c6b90b31618835fa50e8be8fa" Feb 28 11:20:25 crc kubenswrapper[4996]: I0228 11:20:25.295296 4996 generic.go:334] "Generic (PLEG): container finished" podID="b4414458-598b-4b79-b4fd-2b70f867c529" containerID="0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b" exitCode=0 Feb 28 11:20:25 crc kubenswrapper[4996]: I0228 11:20:25.295562 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-mlcgh/must-gather-h59gw" event={"ID":"b4414458-598b-4b79-b4fd-2b70f867c529","Type":"ContainerDied","Data":"0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b"} Feb 28 11:20:25 crc kubenswrapper[4996]: I0228 11:20:25.296849 4996 scope.go:117] "RemoveContainer" containerID="0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b" Feb 28 11:20:25 crc kubenswrapper[4996]: I0228 11:20:25.633953 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mlcgh_must-gather-h59gw_b4414458-598b-4b79-b4fd-2b70f867c529/gather/0.log" Feb 28 11:20:38 crc kubenswrapper[4996]: I0228 11:20:38.842946 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mlcgh/must-gather-h59gw"] Feb 28 11:20:38 crc kubenswrapper[4996]: I0228 11:20:38.843810 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mlcgh/must-gather-h59gw" podUID="b4414458-598b-4b79-b4fd-2b70f867c529" containerName="copy" containerID="cri-o://200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d" gracePeriod=2 Feb 28 11:20:38 crc kubenswrapper[4996]: I0228 11:20:38.854540 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mlcgh/must-gather-h59gw"] Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.376713 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mlcgh_must-gather-h59gw_b4414458-598b-4b79-b4fd-2b70f867c529/copy/0.log" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.377910 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.429448 4996 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mlcgh_must-gather-h59gw_b4414458-598b-4b79-b4fd-2b70f867c529/copy/0.log" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.429829 4996 generic.go:334] "Generic (PLEG): container finished" podID="b4414458-598b-4b79-b4fd-2b70f867c529" containerID="200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d" exitCode=143 Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.429875 4996 scope.go:117] "RemoveContainer" containerID="200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.429995 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mlcgh/must-gather-h59gw" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.456902 4996 scope.go:117] "RemoveContainer" containerID="0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.532167 4996 scope.go:117] "RemoveContainer" containerID="200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d" Feb 28 11:20:39 crc kubenswrapper[4996]: E0228 11:20:39.532623 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d\": container with ID starting with 200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d not found: ID does not exist" containerID="200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.532663 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d"} err="failed 
to get container status \"200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d\": rpc error: code = NotFound desc = could not find container \"200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d\": container with ID starting with 200961702ecab0c7dcca33f3f4035b5e3c95d54441f47f1cacdead4311579e7d not found: ID does not exist" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.532695 4996 scope.go:117] "RemoveContainer" containerID="0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b" Feb 28 11:20:39 crc kubenswrapper[4996]: E0228 11:20:39.533190 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b\": container with ID starting with 0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b not found: ID does not exist" containerID="0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.533218 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b"} err="failed to get container status \"0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b\": rpc error: code = NotFound desc = could not find container \"0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b\": container with ID starting with 0f4d79a50307fbcd7ef4892785b6f613979804a028203c42777a02407258a83b not found: ID does not exist" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.548039 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b4414458-598b-4b79-b4fd-2b70f867c529-must-gather-output\") pod \"b4414458-598b-4b79-b4fd-2b70f867c529\" (UID: \"b4414458-598b-4b79-b4fd-2b70f867c529\") " Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 
11:20:39.548164 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qvvf\" (UniqueName: \"kubernetes.io/projected/b4414458-598b-4b79-b4fd-2b70f867c529-kube-api-access-5qvvf\") pod \"b4414458-598b-4b79-b4fd-2b70f867c529\" (UID: \"b4414458-598b-4b79-b4fd-2b70f867c529\") " Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.554402 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4414458-598b-4b79-b4fd-2b70f867c529-kube-api-access-5qvvf" (OuterVolumeSpecName: "kube-api-access-5qvvf") pod "b4414458-598b-4b79-b4fd-2b70f867c529" (UID: "b4414458-598b-4b79-b4fd-2b70f867c529"). InnerVolumeSpecName "kube-api-access-5qvvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.650191 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qvvf\" (UniqueName: \"kubernetes.io/projected/b4414458-598b-4b79-b4fd-2b70f867c529-kube-api-access-5qvvf\") on node \"crc\" DevicePath \"\"" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.725900 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4414458-598b-4b79-b4fd-2b70f867c529-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b4414458-598b-4b79-b4fd-2b70f867c529" (UID: "b4414458-598b-4b79-b4fd-2b70f867c529"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:20:39 crc kubenswrapper[4996]: I0228 11:20:39.751907 4996 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b4414458-598b-4b79-b4fd-2b70f867c529-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 28 11:20:41 crc kubenswrapper[4996]: I0228 11:20:41.047057 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4414458-598b-4b79-b4fd-2b70f867c529" path="/var/lib/kubelet/pods/b4414458-598b-4b79-b4fd-2b70f867c529/volumes" Feb 28 11:20:52 crc kubenswrapper[4996]: I0228 11:20:52.214720 4996 scope.go:117] "RemoveContainer" containerID="44664b5be46a2951b286597fd0c9fb5632ca1d287fd7857a12447124f58bd86c" Feb 28 11:20:52 crc kubenswrapper[4996]: I0228 11:20:52.278937 4996 scope.go:117] "RemoveContainer" containerID="974e3793e5070c6952a1f07899162fcaf1f06cc3918667236d94bb3cc21ea53e" Feb 28 11:20:52 crc kubenswrapper[4996]: I0228 11:20:52.306712 4996 scope.go:117] "RemoveContainer" containerID="c790e6036ced609a1ebf1c65fc897079e1b08806dc868cc093e98ed9edbc7f6e" Feb 28 11:20:55 crc kubenswrapper[4996]: E0228 11:20:55.033828 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.144104 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537962-5nz8l"] Feb 28 11:22:00 crc kubenswrapper[4996]: E0228 11:22:00.144977 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2de6eb6-4e0c-446a-8d38-11b40da6076a" containerName="oc" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.144991 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2de6eb6-4e0c-446a-8d38-11b40da6076a" containerName="oc" Feb 28 11:22:00 crc 
kubenswrapper[4996]: E0228 11:22:00.145033 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4414458-598b-4b79-b4fd-2b70f867c529" containerName="copy" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.145040 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4414458-598b-4b79-b4fd-2b70f867c529" containerName="copy" Feb 28 11:22:00 crc kubenswrapper[4996]: E0228 11:22:00.145065 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4414458-598b-4b79-b4fd-2b70f867c529" containerName="gather" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.145072 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4414458-598b-4b79-b4fd-2b70f867c529" containerName="gather" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.145270 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4414458-598b-4b79-b4fd-2b70f867c529" containerName="gather" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.145286 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2de6eb6-4e0c-446a-8d38-11b40da6076a" containerName="oc" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.145300 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4414458-598b-4b79-b4fd-2b70f867c529" containerName="copy" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.145961 4996 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537962-5nz8l" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.149207 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.149625 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.150098 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.154462 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537962-5nz8l"] Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.300339 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thm6c\" (UniqueName: \"kubernetes.io/projected/c66bf4c8-7b5c-476d-8372-0ab5239d0d4a-kube-api-access-thm6c\") pod \"auto-csr-approver-29537962-5nz8l\" (UID: \"c66bf4c8-7b5c-476d-8372-0ab5239d0d4a\") " pod="openshift-infra/auto-csr-approver-29537962-5nz8l" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.402622 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thm6c\" (UniqueName: \"kubernetes.io/projected/c66bf4c8-7b5c-476d-8372-0ab5239d0d4a-kube-api-access-thm6c\") pod \"auto-csr-approver-29537962-5nz8l\" (UID: \"c66bf4c8-7b5c-476d-8372-0ab5239d0d4a\") " pod="openshift-infra/auto-csr-approver-29537962-5nz8l" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.420811 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thm6c\" (UniqueName: \"kubernetes.io/projected/c66bf4c8-7b5c-476d-8372-0ab5239d0d4a-kube-api-access-thm6c\") pod \"auto-csr-approver-29537962-5nz8l\" (UID: \"c66bf4c8-7b5c-476d-8372-0ab5239d0d4a\") " 
pod="openshift-infra/auto-csr-approver-29537962-5nz8l" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.519842 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537962-5nz8l" Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.959508 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537962-5nz8l"] Feb 28 11:22:00 crc kubenswrapper[4996]: I0228 11:22:00.962080 4996 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 11:22:01 crc kubenswrapper[4996]: I0228 11:22:01.259741 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537962-5nz8l" event={"ID":"c66bf4c8-7b5c-476d-8372-0ab5239d0d4a","Type":"ContainerStarted","Data":"eac58dd2c80fb46738f30392de96627dc15b046546203d2d715ebbb07bb9d103"} Feb 28 11:22:02 crc kubenswrapper[4996]: I0228 11:22:02.282756 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537962-5nz8l" event={"ID":"c66bf4c8-7b5c-476d-8372-0ab5239d0d4a","Type":"ContainerStarted","Data":"2deea58db4dc3eec7ea576b2919b1e8f8ccbb9d58c58f05f95d94679b623bf47"} Feb 28 11:22:02 crc kubenswrapper[4996]: E0228 11:22:02.373789 4996 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc66bf4c8_7b5c_476d_8372_0ab5239d0d4a.slice/crio-conmon-2deea58db4dc3eec7ea576b2919b1e8f8ccbb9d58c58f05f95d94679b623bf47.scope\": RecentStats: unable to find data in memory cache]" Feb 28 11:22:03 crc kubenswrapper[4996]: I0228 11:22:03.298997 4996 generic.go:334] "Generic (PLEG): container finished" podID="c66bf4c8-7b5c-476d-8372-0ab5239d0d4a" containerID="2deea58db4dc3eec7ea576b2919b1e8f8ccbb9d58c58f05f95d94679b623bf47" exitCode=0 Feb 28 11:22:03 crc kubenswrapper[4996]: I0228 11:22:03.299467 4996 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537962-5nz8l" event={"ID":"c66bf4c8-7b5c-476d-8372-0ab5239d0d4a","Type":"ContainerDied","Data":"2deea58db4dc3eec7ea576b2919b1e8f8ccbb9d58c58f05f95d94679b623bf47"} Feb 28 11:22:03 crc kubenswrapper[4996]: I0228 11:22:03.679457 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537962-5nz8l" Feb 28 11:22:03 crc kubenswrapper[4996]: I0228 11:22:03.868296 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thm6c\" (UniqueName: \"kubernetes.io/projected/c66bf4c8-7b5c-476d-8372-0ab5239d0d4a-kube-api-access-thm6c\") pod \"c66bf4c8-7b5c-476d-8372-0ab5239d0d4a\" (UID: \"c66bf4c8-7b5c-476d-8372-0ab5239d0d4a\") " Feb 28 11:22:03 crc kubenswrapper[4996]: I0228 11:22:03.874331 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66bf4c8-7b5c-476d-8372-0ab5239d0d4a-kube-api-access-thm6c" (OuterVolumeSpecName: "kube-api-access-thm6c") pod "c66bf4c8-7b5c-476d-8372-0ab5239d0d4a" (UID: "c66bf4c8-7b5c-476d-8372-0ab5239d0d4a"). InnerVolumeSpecName "kube-api-access-thm6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:22:03 crc kubenswrapper[4996]: I0228 11:22:03.974364 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thm6c\" (UniqueName: \"kubernetes.io/projected/c66bf4c8-7b5c-476d-8372-0ab5239d0d4a-kube-api-access-thm6c\") on node \"crc\" DevicePath \"\"" Feb 28 11:22:04 crc kubenswrapper[4996]: I0228 11:22:04.308753 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537962-5nz8l" event={"ID":"c66bf4c8-7b5c-476d-8372-0ab5239d0d4a","Type":"ContainerDied","Data":"eac58dd2c80fb46738f30392de96627dc15b046546203d2d715ebbb07bb9d103"} Feb 28 11:22:04 crc kubenswrapper[4996]: I0228 11:22:04.309723 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eac58dd2c80fb46738f30392de96627dc15b046546203d2d715ebbb07bb9d103" Feb 28 11:22:04 crc kubenswrapper[4996]: I0228 11:22:04.308792 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537962-5nz8l" Feb 28 11:22:04 crc kubenswrapper[4996]: I0228 11:22:04.753953 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537956-j4bpx"] Feb 28 11:22:04 crc kubenswrapper[4996]: I0228 11:22:04.762502 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537956-j4bpx"] Feb 28 11:22:05 crc kubenswrapper[4996]: I0228 11:22:05.045641 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af459d2-cf40-434a-8052-9e5a0392c0eb" path="/var/lib/kubelet/pods/5af459d2-cf40-434a-8052-9e5a0392c0eb/volumes" Feb 28 11:22:12 crc kubenswrapper[4996]: I0228 11:22:12.248449 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 11:22:12 crc kubenswrapper[4996]: I0228 11:22:12.249211 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:22:20 crc kubenswrapper[4996]: E0228 11:22:20.033125 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:22:42 crc kubenswrapper[4996]: I0228 11:22:42.248445 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:22:42 crc kubenswrapper[4996]: I0228 11:22:42.249031 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.338051 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ldz7l"] Feb 28 11:22:48 crc kubenswrapper[4996]: E0228 11:22:48.338954 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c66bf4c8-7b5c-476d-8372-0ab5239d0d4a" containerName="oc" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.338969 4996 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c66bf4c8-7b5c-476d-8372-0ab5239d0d4a" containerName="oc" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.339560 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="c66bf4c8-7b5c-476d-8372-0ab5239d0d4a" containerName="oc" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.341146 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.358980 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldz7l"] Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.465320 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hchcx\" (UniqueName: \"kubernetes.io/projected/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-kube-api-access-hchcx\") pod \"certified-operators-ldz7l\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.465875 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-catalog-content\") pod \"certified-operators-ldz7l\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.466042 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-utilities\") pod \"certified-operators-ldz7l\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.568359 4996 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-catalog-content\") pod \"certified-operators-ldz7l\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.568501 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-utilities\") pod \"certified-operators-ldz7l\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.568602 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hchcx\" (UniqueName: \"kubernetes.io/projected/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-kube-api-access-hchcx\") pod \"certified-operators-ldz7l\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.569333 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-catalog-content\") pod \"certified-operators-ldz7l\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.569441 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-utilities\") pod \"certified-operators-ldz7l\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.592896 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hchcx\" 
(UniqueName: \"kubernetes.io/projected/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-kube-api-access-hchcx\") pod \"certified-operators-ldz7l\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:48 crc kubenswrapper[4996]: I0228 11:22:48.670379 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:49 crc kubenswrapper[4996]: I0228 11:22:49.084087 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ldz7l"] Feb 28 11:22:49 crc kubenswrapper[4996]: I0228 11:22:49.794884 4996 generic.go:334] "Generic (PLEG): container finished" podID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerID="90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188" exitCode=0 Feb 28 11:22:49 crc kubenswrapper[4996]: I0228 11:22:49.795209 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldz7l" event={"ID":"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709","Type":"ContainerDied","Data":"90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188"} Feb 28 11:22:49 crc kubenswrapper[4996]: I0228 11:22:49.795241 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldz7l" event={"ID":"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709","Type":"ContainerStarted","Data":"57966e283ae6a358cf41701746267f6aaad7ce7c78f1318895837e1758fc671f"} Feb 28 11:22:52 crc kubenswrapper[4996]: I0228 11:22:52.453925 4996 scope.go:117] "RemoveContainer" containerID="eec9fab826445a9b1b386bc9610be80cd00ed19eb518054b96818bd52c1f0cbd" Feb 28 11:22:52 crc kubenswrapper[4996]: I0228 11:22:52.834606 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldz7l" 
event={"ID":"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709","Type":"ContainerDied","Data":"5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab"} Feb 28 11:22:52 crc kubenswrapper[4996]: I0228 11:22:52.834481 4996 generic.go:334] "Generic (PLEG): container finished" podID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerID="5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab" exitCode=0 Feb 28 11:22:53 crc kubenswrapper[4996]: I0228 11:22:53.844076 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldz7l" event={"ID":"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709","Type":"ContainerStarted","Data":"88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7"} Feb 28 11:22:53 crc kubenswrapper[4996]: I0228 11:22:53.861845 4996 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ldz7l" podStartSLOduration=2.4401545049999998 podStartE2EDuration="5.86182474s" podCreationTimestamp="2026-02-28 11:22:48 +0000 UTC" firstStartedPulling="2026-02-28 11:22:49.798186597 +0000 UTC m=+8533.488989418" lastFinishedPulling="2026-02-28 11:22:53.219856822 +0000 UTC m=+8536.910659653" observedRunningTime="2026-02-28 11:22:53.858383766 +0000 UTC m=+8537.549186577" watchObservedRunningTime="2026-02-28 11:22:53.86182474 +0000 UTC m=+8537.552627551" Feb 28 11:22:58 crc kubenswrapper[4996]: I0228 11:22:58.671416 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:58 crc kubenswrapper[4996]: I0228 11:22:58.671998 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:58 crc kubenswrapper[4996]: I0228 11:22:58.736687 4996 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:58 crc kubenswrapper[4996]: I0228 
11:22:58.950142 4996 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:22:59 crc kubenswrapper[4996]: I0228 11:22:59.014600 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ldz7l"] Feb 28 11:23:00 crc kubenswrapper[4996]: I0228 11:23:00.907372 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ldz7l" podUID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerName="registry-server" containerID="cri-o://88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7" gracePeriod=2 Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.336684 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.449670 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-catalog-content\") pod \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.449895 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-utilities\") pod \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.450080 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hchcx\" (UniqueName: \"kubernetes.io/projected/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-kube-api-access-hchcx\") pod \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\" (UID: \"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709\") " Feb 28 11:23:01 crc kubenswrapper[4996]: 
I0228 11:23:01.450960 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-utilities" (OuterVolumeSpecName: "utilities") pod "3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" (UID: "3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.455523 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-kube-api-access-hchcx" (OuterVolumeSpecName: "kube-api-access-hchcx") pod "3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" (UID: "3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709"). InnerVolumeSpecName "kube-api-access-hchcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.516536 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" (UID: "3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.552666 4996 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.552722 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hchcx\" (UniqueName: \"kubernetes.io/projected/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-kube-api-access-hchcx\") on node \"crc\" DevicePath \"\"" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.552739 4996 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.920391 4996 generic.go:334] "Generic (PLEG): container finished" podID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerID="88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7" exitCode=0 Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.920445 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldz7l" event={"ID":"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709","Type":"ContainerDied","Data":"88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7"} Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.920501 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ldz7l" event={"ID":"3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709","Type":"ContainerDied","Data":"57966e283ae6a358cf41701746267f6aaad7ce7c78f1318895837e1758fc671f"} Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.920524 4996 scope.go:117] "RemoveContainer" containerID="88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 
11:23:01.920560 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ldz7l" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.944547 4996 scope.go:117] "RemoveContainer" containerID="5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab" Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.963373 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ldz7l"] Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.972876 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ldz7l"] Feb 28 11:23:01 crc kubenswrapper[4996]: I0228 11:23:01.981083 4996 scope.go:117] "RemoveContainer" containerID="90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188" Feb 28 11:23:02 crc kubenswrapper[4996]: I0228 11:23:02.013272 4996 scope.go:117] "RemoveContainer" containerID="88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7" Feb 28 11:23:02 crc kubenswrapper[4996]: E0228 11:23:02.013732 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7\": container with ID starting with 88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7 not found: ID does not exist" containerID="88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7" Feb 28 11:23:02 crc kubenswrapper[4996]: I0228 11:23:02.013771 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7"} err="failed to get container status \"88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7\": rpc error: code = NotFound desc = could not find container \"88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7\": container with ID starting with 
88e220edf004fe693fdcb941bf5073b7bd55829575f9ef7456daec636ce33ec7 not found: ID does not exist" Feb 28 11:23:02 crc kubenswrapper[4996]: I0228 11:23:02.013798 4996 scope.go:117] "RemoveContainer" containerID="5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab" Feb 28 11:23:02 crc kubenswrapper[4996]: E0228 11:23:02.014227 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab\": container with ID starting with 5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab not found: ID does not exist" containerID="5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab" Feb 28 11:23:02 crc kubenswrapper[4996]: I0228 11:23:02.014258 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab"} err="failed to get container status \"5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab\": rpc error: code = NotFound desc = could not find container \"5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab\": container with ID starting with 5caf51842cfa2c626c4f202c3c3c30a4a37ef521881054f60741f61c64f70bab not found: ID does not exist" Feb 28 11:23:02 crc kubenswrapper[4996]: I0228 11:23:02.014276 4996 scope.go:117] "RemoveContainer" containerID="90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188" Feb 28 11:23:02 crc kubenswrapper[4996]: E0228 11:23:02.015100 4996 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188\": container with ID starting with 90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188 not found: ID does not exist" containerID="90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188" Feb 28 11:23:02 crc 
kubenswrapper[4996]: I0228 11:23:02.015128 4996 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188"} err="failed to get container status \"90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188\": rpc error: code = NotFound desc = could not find container \"90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188\": container with ID starting with 90ad06c00791252cb82e8d78e6d022cf3531389075cb8a90eebd385b82521188 not found: ID does not exist" Feb 28 11:23:03 crc kubenswrapper[4996]: I0228 11:23:03.052949 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" path="/var/lib/kubelet/pods/3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709/volumes" Feb 28 11:23:12 crc kubenswrapper[4996]: I0228 11:23:12.249565 4996 patch_prober.go:28] interesting pod/machine-config-daemon-jg4sj container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 11:23:12 crc kubenswrapper[4996]: I0228 11:23:12.250422 4996 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 11:23:12 crc kubenswrapper[4996]: I0228 11:23:12.250475 4996 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" Feb 28 11:23:12 crc kubenswrapper[4996]: I0228 11:23:12.251385 4996 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae"} pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 11:23:12 crc kubenswrapper[4996]: I0228 11:23:12.251457 4996 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" containerName="machine-config-daemon" containerID="cri-o://bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae" gracePeriod=600 Feb 28 11:23:12 crc kubenswrapper[4996]: E0228 11:23:12.369889 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:23:13 crc kubenswrapper[4996]: I0228 11:23:13.028138 4996 generic.go:334] "Generic (PLEG): container finished" podID="a98c14ee-40d6-4e30-9390-154743a75c63" containerID="bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae" exitCode=0 Feb 28 11:23:13 crc kubenswrapper[4996]: I0228 11:23:13.028201 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" event={"ID":"a98c14ee-40d6-4e30-9390-154743a75c63","Type":"ContainerDied","Data":"bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae"} Feb 28 11:23:13 crc kubenswrapper[4996]: I0228 11:23:13.028522 4996 scope.go:117] "RemoveContainer" containerID="806c079a5a1551c11992d62c6379f94a5a2b3cadbf51278f03d70633d462650c" Feb 28 11:23:13 crc kubenswrapper[4996]: I0228 11:23:13.029801 4996 
scope.go:117] "RemoveContainer" containerID="bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae" Feb 28 11:23:13 crc kubenswrapper[4996]: E0228 11:23:13.030343 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:23:25 crc kubenswrapper[4996]: I0228 11:23:25.033568 4996 scope.go:117] "RemoveContainer" containerID="bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae" Feb 28 11:23:25 crc kubenswrapper[4996]: E0228 11:23:25.034869 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:23:36 crc kubenswrapper[4996]: I0228 11:23:36.033454 4996 scope.go:117] "RemoveContainer" containerID="bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae" Feb 28 11:23:36 crc kubenswrapper[4996]: E0228 11:23:36.035157 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:23:41 crc kubenswrapper[4996]: E0228 
11:23:41.033945 4996 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="test-operator-logs-pod-horizontest-horizontest-tests-horizontest" hostnameMaxLen=63 truncatedHostname="test-operator-logs-pod-horizontest-horizontest-tests-horizontes" Feb 28 11:23:50 crc kubenswrapper[4996]: I0228 11:23:50.033276 4996 scope.go:117] "RemoveContainer" containerID="bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae" Feb 28 11:23:50 crc kubenswrapper[4996]: E0228 11:23:50.033938 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.150426 4996 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537964-mrb84"] Feb 28 11:24:00 crc kubenswrapper[4996]: E0228 11:24:00.151436 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerName="extract-utilities" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.151455 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerName="extract-utilities" Feb 28 11:24:00 crc kubenswrapper[4996]: E0228 11:24:00.151482 4996 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerName="registry-server" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.151490 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerName="registry-server" Feb 28 11:24:00 crc kubenswrapper[4996]: E0228 11:24:00.151521 4996 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerName="extract-content" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.151529 4996 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerName="extract-content" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.151745 4996 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8b0e37-adc1-47b4-a9f0-33b1bfa3e709" containerName="registry-server" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.152561 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537964-mrb84" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.155123 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.157764 4996 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.159046 4996 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-5xnwq" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.171146 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537964-mrb84"] Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.231605 4996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf697\" (UniqueName: \"kubernetes.io/projected/9f0f3875-1c00-4f37-a52f-edb8efc646de-kube-api-access-xf697\") pod \"auto-csr-approver-29537964-mrb84\" (UID: \"9f0f3875-1c00-4f37-a52f-edb8efc646de\") " pod="openshift-infra/auto-csr-approver-29537964-mrb84" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.334731 4996 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xf697\" (UniqueName: \"kubernetes.io/projected/9f0f3875-1c00-4f37-a52f-edb8efc646de-kube-api-access-xf697\") pod \"auto-csr-approver-29537964-mrb84\" (UID: \"9f0f3875-1c00-4f37-a52f-edb8efc646de\") " pod="openshift-infra/auto-csr-approver-29537964-mrb84" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.370199 4996 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf697\" (UniqueName: \"kubernetes.io/projected/9f0f3875-1c00-4f37-a52f-edb8efc646de-kube-api-access-xf697\") pod \"auto-csr-approver-29537964-mrb84\" (UID: \"9f0f3875-1c00-4f37-a52f-edb8efc646de\") " pod="openshift-infra/auto-csr-approver-29537964-mrb84" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.475180 4996 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537964-mrb84" Feb 28 11:24:00 crc kubenswrapper[4996]: I0228 11:24:00.901067 4996 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537964-mrb84"] Feb 28 11:24:01 crc kubenswrapper[4996]: I0228 11:24:01.454311 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537964-mrb84" event={"ID":"9f0f3875-1c00-4f37-a52f-edb8efc646de","Type":"ContainerStarted","Data":"91b476a7bdf85630e8e02b7a7e690a28b1dfe804b8ce7db92439fe8487cba29c"} Feb 28 11:24:02 crc kubenswrapper[4996]: I0228 11:24:02.466293 4996 generic.go:334] "Generic (PLEG): container finished" podID="9f0f3875-1c00-4f37-a52f-edb8efc646de" containerID="c901c038323d24a7bef56b832c8836f3c5e3c369d331394c879a5c125d106379" exitCode=0 Feb 28 11:24:02 crc kubenswrapper[4996]: I0228 11:24:02.466416 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537964-mrb84" event={"ID":"9f0f3875-1c00-4f37-a52f-edb8efc646de","Type":"ContainerDied","Data":"c901c038323d24a7bef56b832c8836f3c5e3c369d331394c879a5c125d106379"} Feb 28 11:24:03 crc kubenswrapper[4996]: I0228 
11:24:03.861034 4996 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537964-mrb84" Feb 28 11:24:03 crc kubenswrapper[4996]: I0228 11:24:03.907727 4996 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf697\" (UniqueName: \"kubernetes.io/projected/9f0f3875-1c00-4f37-a52f-edb8efc646de-kube-api-access-xf697\") pod \"9f0f3875-1c00-4f37-a52f-edb8efc646de\" (UID: \"9f0f3875-1c00-4f37-a52f-edb8efc646de\") " Feb 28 11:24:03 crc kubenswrapper[4996]: I0228 11:24:03.919362 4996 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f0f3875-1c00-4f37-a52f-edb8efc646de-kube-api-access-xf697" (OuterVolumeSpecName: "kube-api-access-xf697") pod "9f0f3875-1c00-4f37-a52f-edb8efc646de" (UID: "9f0f3875-1c00-4f37-a52f-edb8efc646de"). InnerVolumeSpecName "kube-api-access-xf697". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 11:24:04 crc kubenswrapper[4996]: I0228 11:24:04.010180 4996 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf697\" (UniqueName: \"kubernetes.io/projected/9f0f3875-1c00-4f37-a52f-edb8efc646de-kube-api-access-xf697\") on node \"crc\" DevicePath \"\"" Feb 28 11:24:04 crc kubenswrapper[4996]: I0228 11:24:04.485245 4996 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537964-mrb84" event={"ID":"9f0f3875-1c00-4f37-a52f-edb8efc646de","Type":"ContainerDied","Data":"91b476a7bdf85630e8e02b7a7e690a28b1dfe804b8ce7db92439fe8487cba29c"} Feb 28 11:24:04 crc kubenswrapper[4996]: I0228 11:24:04.485283 4996 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b476a7bdf85630e8e02b7a7e690a28b1dfe804b8ce7db92439fe8487cba29c" Feb 28 11:24:04 crc kubenswrapper[4996]: I0228 11:24:04.485559 4996 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537964-mrb84" Feb 28 11:24:04 crc kubenswrapper[4996]: I0228 11:24:04.942272 4996 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537958-9n47s"] Feb 28 11:24:04 crc kubenswrapper[4996]: I0228 11:24:04.954747 4996 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537958-9n47s"] Feb 28 11:24:05 crc kubenswrapper[4996]: I0228 11:24:05.033238 4996 scope.go:117] "RemoveContainer" containerID="bef2dc663f5bf4e0a55ec1dd67c6eb09d55882a14baf9ab6f35c48c1b353b2ae" Feb 28 11:24:05 crc kubenswrapper[4996]: E0228 11:24:05.033617 4996 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jg4sj_openshift-machine-config-operator(a98c14ee-40d6-4e30-9390-154743a75c63)\"" pod="openshift-machine-config-operator/machine-config-daemon-jg4sj" podUID="a98c14ee-40d6-4e30-9390-154743a75c63" Feb 28 11:24:05 crc kubenswrapper[4996]: I0228 11:24:05.048445 4996 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb0cbbd-b638-4d2a-975d-ed3dddb3b031" path="/var/lib/kubelet/pods/ffb0cbbd-b638-4d2a-975d-ed3dddb3b031/volumes"